Get PID in bash file with open screen - linux

I am a beginner in bash programming. I want to obtain PIDs from processes, in order to use trap and kill to receive and send signals to a program in the same file.
In particular, I start the program opening a screen in this way:
screen -d -m "start program"
process_id=`/bin/ps -fu $USER| grep "program" | grep -v "grep" | awk '{print $2}'`
The variable process_id ends up containing two PIDs, not one. If I run the program without a screen I don't have this issue (but I do need to use the screen).
Does anyone have solutions to this problem?
Another question: If I write
screen -d -m "start program">log
the log file isn't printed. Any suggestions?

For your first question, pgrep (process grep) is what you are looking for.
For instance, the following will return a list of PIDs of all running bash processes:
pgrep bash
And if you read the docs:
-signal
Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used.
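As a hedged aside ("program" stands in for the real name, as in the question): the two PIDs from the original ps pipeline are most likely the screen wrapper, whose command line also contains "program", plus the program itself; matching on the exact process name avoids the duplicate.
pgrep -x program        # exact process-name match: lists only the program itself
pkill -TERM -x program  # same match, sending SIGTERM directly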
For your second question, you can either use the -Logfile flag, if your version of screen supports it, or specify the log file in your .screenrc configuration file.
This has already been answered.
Edit:
If you can't access the user's home directory, where the configuration file .screenrc usually lives, you can set the $SCREENRC environment variable to point explicitly to an alternative path for it.
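A short sketch of both routes (flag support varies by screen version, so treat this as an assumption to verify against your install):
screen -L -Logfile /tmp/program.log -d -m program   # newer screen versions only

# or enable logging from an alternative config file:
printf 'deflog on\nlogfile /tmp/program.log\n' > /tmp/alt.screenrc
SCREENRC=/tmp/alt.screenrc screen -d -m program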

Related

Linux shell wrap a program's stdin and stdout using pipes

So, I have this interactive program running on an embedded Linux ARM platform with no screen, and I cannot modify it. To interact with it I have to ssh into the embedded Linux distro and run the program, which is some sort of custom command line with builtin commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it using pipes, by sending SSH commands like this: ssh user@host echo "command" > stdinpipe. This part works; I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything inside the terminal. The problem is that while it works when I have an SSH session open, it won't when I only send commands one by one as I said earlier. (I'm using paramiko in Python with the exec_command() method, just in case that matters, but I don't think it is relevant; I could use invoke_session(), but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where the problems arise. My first attempt was this one (please ignore the fact that everything runs as root and is stored in root's home folder; that's how I got it, I don't have the time to make it cleaner now, and I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal and type ssh root@host "echo command > /root/binary/inpipe", the code gets executed, but it then outputs the command I just typed and its result into the other terminal that stayed open. So that is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why. I know it never starts because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried redirecting the output to a normal file instead of a fifo, with exactly the same results.
I've spent the entire day on this thing and I cannot get it to work. Why is this not working? I am probably also going about this in an awful way; is there another way? The only thing that's mandatory here is that I connect to the board through ssh and that the command-line utility stays open, because it is communicating with onboard devices (using I2C, OneWire and other protocols).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and have its stdout go somewhere else (a file, a buffer, I don't care) that I can easily retrieve later, after an arbitrary amount of time, with cat, tail or some other command over ssh.
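One fifo detail worth noting when reproducing this setup: a fifo opened for writing blocks in open() until a reader opens the other end, which would be consistent with ./command_bin >outpipe & appearing to never start. A minimal sketch of the reader-first pattern, with illustrative names:
#!/bin/sh
# reader-first pattern: attach a persistent reader to the fifo so that
# writers do not block on open(), and copy everything into a regular
# file that can be inspected later
mkfifo outpipe
cat outpipe >> out.log &          # persistent reader
some_program > outpipe 2>&1 &     # no longer blocks at startup
# later, from another shell or over ssh:
tail out.log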

bash -- kill command script [duplicate]

This question already has answers here:
Find and kill a process in one line using bash and regex
(30 answers)
Closed 6 years ago.
I'm looking into writing shell scripts as a prerequisite for a class and would like some help getting started. I'm currently doing a warm-up exercise that requires me to write a shell script that, when executed, will kill any currently running process of a command I have given. For this exercise, I'm using the 'less' command (so to test, I would input 'man ps | less').
However, since this is the first REAL script I'm writing (besides the traditional "Hello World!" one), I'm a little stuck on how to start. I've googled a lot and turned up some rather confusing results. I'm aware I need to start with a shebang, but I'm not sure why. I was thinking of using an 'if' statement; something along the lines of
if 'less' is running
kill 'less' process
fi
But I'm not sure of how to go about that. Since I'm incredibly new at this, I also want to make sure I'm going about writing a script correctly. I'm using notepad as a text editor, and once I've written my script there, I'll save it to a directory that I access in a terminal and then run from there, correct?
Thank you very much for any advice or resources you could give me. I'm certain I can figure out harder exercises once I get the basics of writing a script down.
Try:
pgrep less && killall less
pgrep less looks up the process IDs of any processes named less. If a process is found, it returns true, in which case the && clause is triggered. killall less kills any process named less.
See man pgrep and man killall.
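If the exercise permits collapsing the two steps, pgrep's companion pkill does the match and the kill in one command (a sketch, not part of the original answer):
pkill -x less   # exact name match; sends the default SIGTERM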
Simplification
This may miss the point of your exercise, but there is no real need to test for a less process running. Just run:
killall less
If no less process is running, then killall does nothing.
Try this simple snippet:
#!/bin/bash
# if one or more processes matching "less" are running
# (ps will return 0, which corresponds to true, in that case):
if ps -C less
then
    # send all processes matching "less" the TERM signal:
    killall -TERM less
fi
For more information on available signals, see the table in the man page available via man 7 signal.
You might try the following code in bash:
#!/bin/bash
# the shebang above must be the very first line of the file, so the
# kernel knows which interpreter will process the code

# variable holding the program name you want to search for and kill
# (mind: no spaces around the equals sign)
program='less'

# use ps to list all processes and grep to search for the specific
# program name; redirect the visible text output to /dev/null (the
# Linux black hole) since we don't want to see it on screen
ps aux | grep "$program" | grep -v grep > /dev/null

# if the given program is found, $? will hold 0, since grep returns 0
# on success
if [ $? -eq 0 ]; then
    # the program is running; kill it with killall
    killall -9 "$program"
fi

Is a script called somewhere

On one of my Linux servers I have a script that performs some checks.
Is there a way to find out from where this script is called? The caller could be
another script, a COBOL program, a crontab entry, ...
Opening every one of them would take a very long time.
If you can modify the script, put in a ps line to get the parent PID, then ps again and grep for that parent PID to get the calling command, and log it to a file.
Come back in a week or so and you should have the command that is triggering your script. In case it's something nested, you may want to recurse or do something similar.
To do this without modifying the script, you'll need a watcher script/program that checks for access to the script file or calls ps every so often. However, if you have that kind of access, just modifying the script is probably easier.
Edit: Apparently the commands to get the parent pid and command for it, without repeatedly calling ps, look something like:
ps -p $$ -o ppid=
cat /proc/<pid>/cmdline
(from jweyrich's answer here)
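Combined into a single logging line that could be dropped into the script (the log path is illustrative):
# resolve our parent PID (tr strips ps's padding), then log the
# parent's command line with a timestamp
ppid=$(ps -p $$ -o ppid= | tr -d ' ')
echo "$(date): called by $(tr '\0' ' ' < /proc/${ppid}/cmdline)" >> /tmp/caller.log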
Grep for it:
grep -lr yourscript /etc /opt/anotherlikelydir
failing that, search the whole system: grep -lr yourscript /
Edit:
failing that, search in binaries too: grep -lar yourscript /
failing that, the script is either executed by a logged-in user or by a scripted remote login... if that's the case, try peachykeen's approach and edit the script... and why not dump a ps axf to a log too.
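And since crontab is on the list of suspects: per-user crontabs live in a spool directory that a filesystem grep may not reach, so it can be worth checking them directly (a sketch, run as root):
# list every user's crontab and search it for the script name
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | grep -q yourscript && echo "found in crontab of $u"
done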

How to restart background php process? (how to get pid)

I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is that I need my script to, from time to time, kill the processes and restart them, as they maintain long-standing HTTP requests that are sometimes ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same? or give the process a name? and how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process PID; instead, you should do what "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
An alternative would be to use something like supervisor, which handles all of that for you quite nicely.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
if not, just add those two lines and create the directory:
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...etc, etc.. for all your scripts.
(Note that you don't need the trailing &, as supervisor handles all the daemonization for you; in fact, you shouldn't run self-daemonizing programs under supervisor.)
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I never did that, but you should be able to run it as the www-data user too..).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
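A minimal sketch of that pattern around the asker's launch command (the pidfile path is illustrative):
# launch in the background and record the PID ($! is the last background job's PID)
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > /tmp/php_script.pid

# later, to restart: kill the recorded PID, then relaunch and re-record
kill "$(cat /tmp/php_script.pid)"
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > /tmp/php_script.pid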
A workaround would be to use ps aux, which shows all of the processes together with the commands that called them. This presumes, of course, that the four scripts are different files or can be uniquely identified by the command that called them. Pipe it through grep and you're all set: ps aux | grep runningscript.php
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash/whatever scripting...
@redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root, and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only commands of this type I was executing showed up in top as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!
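For what it's worth, pkill can usually do the same in one command, and its -f flag matches the full command line, which keeps it from killing every php process on the box (assuming pkill is installed on the host):
pkill -9 -f php_script.php   # -9 is SIGKILL; -f matches the whole command line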
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
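That check might look something like this inside each script's main loop (a sketch: the file name is the asker's, do_work is a hypothetical placeholder):
# keep working only while run.txt exists; renaming the file stops the loop
while [ -e /home/path/to/run.txt ]; do
    do_work    # hypothetical stand-in for the script's real work
    sleep 5
done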

Bind application to several Linux terminals

I have an application which spawns several processes. Is it possible to redirect the children's output to another, hidden terminal so that it does not mix with the parent's output, and give the end user the ability to unhide that terminal when needed?
Thanks.
The quick and dirty way to do this is to redirect the child process' output to a (temporary) file.
A terminal tracking that file can then be started using a command like
xterm -e tail -f /tmp/child1.out
This terminal can be closed and opened when needed.
If you'd rather not store the output in a file, you can use a fifo (see mkfifo(1)), but then you lose the ability to see the past output, since a fifo doesn't store data.
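A sketch of that fifo variant (names are illustrative; note the viewer must be attached before the child starts writing, or the child will block on open):
mkfifo /tmp/child1.fifo
xterm -e cat /tmp/child1.fifo &      # viewing terminal, attached first
child_program > /tmp/child1.fifo &   # child output streams to the fifo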
from your terminal, run:
touch proc1.log
xterm -e tail -f proc1.log
touch proc2.log
xterm -e tail -f proc2.log
/run/proc/1.sh >> proc1.log
/run/proc/2.sh >> proc2.log
now you have 2 terminals following the output of the spawned processes
Screen can do this. You can start a detached screen with the new program.
Something like:
screen -d -m -S my-emacs-session emacs foo.c
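The end user can then unhide it by reattaching to the named session, and hide it again by detaching:
screen -r my-emacs-session   # reattach to the named session
# press Ctrl-a d inside the session to detach again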
