Bash: display processes running from a specific folder - linux

I need to display the processes that are running from a specific folder.
For example, there are folders "TEST" and "RUN". 3 SQL files are running from TEST, and 2 from RUN. So when I use the command ps xa, I see all the processes run from TEST and RUN together. What I want is to see only the processes run from the TEST folder, so only 3. Any commands or solutions to do this?

You can use lsof for this:
lsof | grep '/path/of/RUN'
If you want to include both RUN and TEST in the same command:
lsof | grep -E "/path/of/RUN|/path/of/TEST"
Hope it helps.
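If you only care about processes whose current working directory is inside the folder (rather than any process with any file open under it), lsof's descriptor filter can narrow the match. A sketch, assuming the /path/of/TEST layout from the question; option support varies by lsof version, so check your man page:
# AND the selections: FD is the cwd entry AND the path lies under /path/of/TEST
lsof -a -d cwd +D /path/of/TEST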

You can try fuser to see which processes have particular files open; or, on Linux, examine the /proc/12345/cwd symlink for each of the candidate processes (replace 12345 with the process ID of each).
fuser TEST/*.sql
for proc in /proc/[1-9]*; do
    # cwd is a symlink to the process's current working directory
    readlink "$proc/cwd" | grep -q TEST && echo "$proc"
done
The latter is not portable to other U*xes, though some may offer similar facilities.
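To get readable detail on the matches, the PIDs from that loop can be handed to ps. A sketch building on the loop above (TEST is the pattern from the question; a full path would avoid false matches):
for proc in /proc/[1-9]*; do
    if readlink "$proc/cwd" | grep -q TEST; then
        ps -fp "${proc#/proc/}"  # strip the /proc/ prefix to get the PID
    fi
done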

Get files used by a binary

I am trying to locate a file used by a binary during its execution. Using strace helps, but it's way too convoluted; combined with grep it is good enough. But does there exist a utility which can dump only the files used by a binary?
You can try using:
lsof -p <PID of the running process>
lsof -c ssh would show all files opened by processes whose command name starts with ssh.
Or try ltrace, or maybe fuser.
I've seen strace used with some complex grep piping, but it all depends on what exactly the end goal is.
You can also use the -e option of strace to filter; an example is:
sudo strace -t -e trace=open,close,read,getdents,write,connect,accept whoami >/dev/null
and grep from there.
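Since strace writes the trace to stderr, the "grep from there" step needs a redirect. A small sketch, assuming you only care about open/openat calls that succeeded:
# pipe the trace (stderr) into grep, discard the program's own stdout
sudo strace -e trace=open,openat whoami 2>&1 >/dev/null | grep -v ENOENT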

Linux bash script that kills a process (not started by me) after x amount of time

I'm pretty inexperienced with Linux bash. That being said, I have a CentOS7 machine that runs a COTS application server. This application server runs other processes that sometimes hang. Since I have no control over the start of these processes, I'm looking for a script that runs every 2 minutes that kills processes of the name "spicer" that have been running for longer than 10 minutes. I've looked around and have only been able to find answers for processes that are run and owned by me.
I use the command ps -eo pid,command,etime | grep spicer to get all the spicer processes. The output of this command looks like:
18216 spicer -l/opt/otmm-10.5/Spi 14:20
18415 spicer -l/opt/otmm-10.5/Spi 11:49
etc...
18588 grep --color=auto spicer
I don't know if there's a way to parse this directly in bash. I'm also not well-versed at all in other Linux tools. I know that awk (or gawk) could possibly help.
EDIT
I have no control over the data that the process is working on.
What about wrapping the spicer executable and starting it via the timeout command? Let's say it is installed as /usr/bin/spicer. Then issue:
cp /usr/bin/spicer{,.orig}
echo '#!/bin/bash' > /usr/bin/spicer
echo 'timeout 10m spicer.orig "$@"' >> /usr/bin/spicer
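One follow-up note on the wrapper: the > redirection truncates /usr/bin/spicer in place, so the file normally keeps its execute bit; if it ends up non-executable, restore it:
chmod +x /usr/bin/spicer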
Another approach would be to create a cronjob definition in /etc/cron.d/kill_spicer, like this:
* * * * * root kill $(ps --no-headers -C spicer -o pid,etimes | awk '$2>=600{print $1}')
The cronjob gets executed every minute; it uses ps to obtain the list of spicer processes that have been running for longer than 10 minutes (600 seconds) and passes their PIDs to kill.
You probably even want kill -9 if the process is hanging.
You can use the -C option of ps to select processes by name.
ps --no-headers -C spicer -o pid,etime
Then you can use cut to filter the results, if the spacing is consistent. On my system the pid field takes up 8 characters, so I'd use
kill $(ps --no-headers -C spicer -o pid,etime | cut -c-8)
If the spacing is inconsistent (but if so, what kind of messed up ps are you using? :-P), you can use awk '{ print $1 }' instead of cut.
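Putting the pieces together for the stated requirement (runs every 2 minutes, kills spicer processes older than 10 minutes): a minimal sketch, where the script name kill_spicer.sh and its install path are assumptions:
#!/bin/bash
# kill spicer processes that have been running for 600 seconds or more
pids=$(ps --no-headers -C spicer -o pid,etimes | awk '$2 >= 600 { print $1 }')
[ -n "$pids" ] && kill $pids
with a matching /etc/cron.d/kill_spicer entry:
*/2 * * * * root /usr/local/bin/kill_spicer.sh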

Bash script commands not running

I have seafile (http://www.seafile.com/en/home/) running on my NAS, and I set up a crontab that runs a script every few minutes to check if the seafile server is up and, if not, start it.
The script looks like this:
#!/bin/bash
# exit if process is running
if ps aux | grep "[s]eafile" > /dev/null
then
    exit
else
    # restart process
    /home/simon/seafile/seafile-server-latest/seafile.sh start
    /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi
fi
Running /home/simon/seafile/seafile-server-latest/seafile.sh start and /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi individually/manually works without a problem, but when I try to run this script file manually, neither of those lines executes and seafile/seahub do not start.
Is there an error in my script that is preventing execution of those two lines? I've made sure to chmod the script file to 755.
The problem is likely that when you pipe commands into one another, you don't guarantee that the second command doesn't start before the first (it can start, but not do anything while it waits for input). For example:
oj@ironhide:~$ ps -ef | grep foo
oj 8227 8207 0 13:54 pts/1 00:00:00 grep foo
There is no process containing the word "foo" running on my machine, but the grep that I'm piping ps to appears in the process list that ps produces.
You could try using pgrep instead, which is pretty much designed for this sort of thing:
if pgrep "[s]eafile"
Or you could add another pipe to filter out results that include grep:
ps aux | grep "[s]eafile" | grep -v grep
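For instance, the checking script rewritten around pgrep; a minimal sketch using the start commands from the question (note that if the checking script's own filename contains "seafile", it would match itself):
#!/bin/bash
# restart seafile/seahub only when no seafile process is running
if ! pgrep seafile > /dev/null
then
    /home/simon/seafile/seafile-server-latest/seafile.sh start
    /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi
fi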
If the name of this script matches the regex [s]eafile it will trivially always take the exit branch.
You should probably be using pidof in preference to reinventing the yak shed anyway.
Turns out the script itself was working OK, although the change to using pgrep is much nicer. The problem was actually in the crontab (I didn't include the sh in the command).

Shell script to avoid killing the process that started the script

I am very new to shell scripting; can anyone help solve a simple problem? I have written a simple shell script that does:
1. Stops a few servers.
2. Kills all the processes owned by user1.
3. Starts a few servers.
This script runs on the remote host, so I need to ssh to the machine, copy my script over, and then run it. The command I have used for killing all the processes is:
ps -efww | grep "user1"| grep -v "sshd"| awk '{print $2}' | xargs kill
Problem 1: since user1 is used both for the ssh session and for running the script, this kills the process that is running the script and never gets to starting the servers. Can anyone help me modify the above command?
Problem 2: how can I automate the process of sshing into the machine and running the script?
I have tried an expect script, but do I need a separate script for sshing and performing these tasks, or can I do it all in one script?
Any help is welcome.
Basically, the answer is already in your script.
Just exclude your script from the found processes, like this:
grep -v <your script name>
Regarding running the script automatically after you ssh: that can be done with a special ssh configuration.
Just create a simple script like:
#!/bin/bash
ssh user1@remotehost '
someservers stop
# kill processes here
someservers start
'
In order to avoid the script killing itself while stopping all of the user's processes, try adding | grep -v bash after grep -v "sshd".
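Putting that suggestion into the original command from the question (a sketch; the nuances discussed in the next answer still apply):
ps -efww | grep "user1" | grep -v "sshd" | grep -v "bash" | awk '{print $2}' | xargs kill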
This is a problem with some nuance, and not straightforward to solve in shell.
The best approach
My suggestion, for easier system administration, would be to redesign. Run the killing logic as root, for example, so you may safely TERMinate any luser process without worrying about sawing off the branch you are sitting on. If your concern is runaway processes, run them under a timeout. Etc.
A good enough approach
Your ssh login shell session will have its own pseudo-tty, and all of its descendants will likely share that. So, figure out that tty name and skip anything with that tty:
TTY=$(tty | sed 's!^/dev/!!')  # e.g. TTY=pts/3
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e "^$TTY" -e sshd | awk ...
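For completeness, one possible ending for that pipeline (a sketch: the awk and kill stages are my guess at the elided part, and luser stands in for the actual username):
TTY=$(tty | sed 's!^/dev/!!')
# field 3 is the pid, given the tty,user,pid,cmd output order above
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e "^$TTY" -e sshd | awk '{ print $3 }' | xargs -r kill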
Almost good enough approaches
The problem with "almost good enough" solutions (like simply excluding the current script and sshd via ps -eo user=,pid=,cmd= | grep -v -e sshd -e fancy_script | awk ...) is that they rely heavily on the accident of invocation. ps auxf probably reveals that you have a login shell sitting between your script and your sshd (probably -bash); you could put in special logic to skip that, too, but that's hardly robust if your script's invocation changes in the future.
What about question no. 2? (How can I automate sshing...?)
Good question. Off-topic. Try superuser.com.

How to find out which process is using a file in Linux?

I tried to remove a file in Linux using rm -rf file_name, but got the error:
rm: file_name not removed. Text file busy
How can I find out which process is using this file?
You can use the fuser command, which is part of the psmisc package, like:
fuser file_name
You will receive a list of processes using the file.
You can use different flags with it in order to receive more detailed output.
You can find more info in fuser's Wikipedia article or in the man pages.
@jim's answer is correct -- fuser is what you want.
Additionally (or alternatively), you can use lsof to get more information, including the username, in case you need permission (without having to run an additional command) to kill the process. (Though of course, if killing the process is what you want, fuser can do that with its -k option; it sends SIGKILL by default, and you can name a different signal, e.g. fuser -k -HUP -- check the man page for details.)
For example, with a tail -F /etc/passwd running in one window:
ghoti@pc:~$ lsof | grep passwd
tail 12470 ghoti 3r REG 251,0 2037 51515911 /etc/passwd
Note that you can also use lsof to find out what processes are using particular sockets. An excellent tool to have in your arsenal.
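For the socket case, a quick illustration (port 80 is an arbitrary example):
lsof -i :80   # list processes with a connection or listener on port 80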
For users without fuser:
Although we can use lsof, there is another way: we can query the /proc filesystem itself, which lists all files opened by all processes.
# ls -l /proc/*/fd/* | grep filename
Sample output below:
l-wx------. 1 root root 64 Aug 15 02:56 /proc/5026/fd/4 -> /var/log/filename.log
From the output, one can use the process ID with a utility like ps to find the program name.
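For example, with the PID from the sample output above:
$ ps -p 5026 -o pid,comm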
You can also simply grep the lsof output for the folder name:
$ lsof | grep MyFold
