How to distinguish between two running Linux scripts with the same name? - linux

I use SSH to connect to a Linux machine, sometimes run the same Linux script several times, use nohup to keep those processes running in the background, and then close the SSH connection. On the next SSH connection to the machine, how can I tell the different runs apart and get their individual PIDs?
The script continuously prints content to the screen. Using Python's paramiko library, I SSH to the machine, run the script under nohup, and redirect its output to a file. This may happen multiple times. How can I identify a particular run, find its PID, and kill it? Preferably without modifying the Linux script, because the script is not written by me.
When I search for the process by script name, I get many PIDs and cannot tell them apart.

You could parse the output of ps -eo pid,lstart,cmd, which shows the process ID, start time and command, e.g.:
PID STARTED CMD
1 Mon Jun 19 21:31:08 2017 /sbin/init
2 Mon Jun 19 21:31:08 2017 [kthreadd]
3 Mon Jun 19 21:31:08 2017 [ksoftirqd/0]
== Edit ==
Be aware that if the remote is macOS, the ps command does not recognize the cmd keyword; use comm or command instead, e.g.: ps -eo pid,lstart,comm
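Building on that ps invocation, here is a minimal sketch that lists each matching PID together with its start time; "myscript.sh" is a placeholder for your actual script name:

```shell
# List every running instance of the script with its start time, so
# separate runs can be told apart by when they began.
# "myscript.sh" is a placeholder; substitute the real script name.
for pid in $(pgrep -f 'myscript\.sh'); do
    printf 'PID %s started %s\n' "$pid" "$(ps -o lstart= -p "$pid")"
done
```

The run you want is then identified by its start time; pass the chosen PID to kill.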

Use the ps command to check running processes.
To check only shell scripts, you can do something like this:
ps -eaf | grep '\.sh'
This will give you information about running shell scripts only, so you can easily distinguish between them.
In place of '\.sh' you can also give the file name; then you will get information only about that running script.

maybe change the command you run to do something like:
nohup command.sh &
echo "$! `date`" >> runlog.txt
wait
i.e. run the command in the background, append its PID to a log (you might want to include more identifying information here, or use a format that's more easily machine-readable), then wait for it to finish
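The reconnect side could then look something like this sketch, assuming runlog.txt holds one "<pid> <date>" line per run as written above (sample data is used here so the snippet is self-contained):

```shell
# Create a sample log line in the "<pid> <date>" format for illustration.
printf '%s\n' '12345 Mon Jun 19 21:31:08 2017' > runlog.txt
# List each logged run so you can pick the one to terminate.
while read -r pid started; do
    echo "run $pid started: $started"
done < runlog.txt
# then: kill <chosen pid>
```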
another variant would be to run tmux (or GNU screen) on the server and run commands in an existing session:
tmux new-window command
which would also let you "reconnect" to running scripts later to check output or terminate them

Related

How to force a service (bash script) to read from stdin (ask me to input something) when the script is executing?

I created a service with systemd that uses wendy (an inotify replacement tool) to listen to a directory and run a bash script when something changes.
However, my script relies on stdin at a certain point to read a variable. But when the service runs, it skips asking me for input in a terminal entirely and proceeds with the rest of the bash script.
I'm new to systemd and services; is there any way I can force it to ask me for input?
This is what appeared in /var/log/syslog:
Oct 7 21:52:09 server wendy.sh[13062]: was added to scripts.
Oct 7 21:52:09 server wendy.sh[13062]: enter scriptname: (/home/user/scripts/blah.sh)
Oct 7 21:52:09 server wendy.sh[13062]: chmod: missing operand after +x
Oct 7 21:52:09 server wendy.sh[13062]: Try 'chmod --help' for more information.
It was supposed to ask me for a script name to pass into chmod.
How can I accomplish this?
Thank you
Unfortunately, it is very difficult to ask for user input in a script running as a background task, because it has no terminal attached. I would advise you to try to find an alternative to reading input from stdin.
If you really want to achieve that, you could for example run a program somewhere that listens on a UNIX socket, and your automated script could communicate with this listener to ask for input (c.f. this Stack Exchange post).
With netcat-openbsd, there is a -U option. If you don't have it, you probably have netcat-traditional installed instead; I'd suggest switching.
Example command: nc -U /var/run/socket
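If netcat is not available, a named pipe (FIFO) is a simpler way to hand one line of input to a script that has no terminal. This is only a sketch; the path and the simulated answer below are made up:

```shell
#!/bin/sh
# Create a FIFO that the interactive side can write into.
fifo="${TMPDIR:-/tmp}/wendy-input.$$.fifo"
mkfifo "$fifo"
# Simulate someone answering from another terminal:
( echo '/home/user/scripts/blah.sh' > "$fifo" ) &
# In the service's script, replace `read name` with a read from the FIFO:
read -r name < "$fifo"
echo "script to chmod: $name"
rm -f "$fifo"
```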

How to kill a script that has been executed using source?

The problem is that I cannot find the process ID of a script that has been executed using source. I am able to do so when it is launched with bash, using ps -ef.
If I run a script using bash, I can figure the process ID using ps -ef | grep "test1.sh" | grep -v "grep". However, if I run the script using source, I cannot search for it and hence cannot find the process ID.
I have read the difference between the bash and source commands from this link.
This is my testing procedure:
I have 2 terminals. In one of them, I am searching for process IDs using ps -ef. In the other one, I run a script which prints 'Hello' every one second (an infinite while loop with sleep of 1 second). With bash, PID is searchable, but with source, grep doesn't get any results.
I am working on an Ubuntu 18.04.2 LTS machine
If you do not want to terminate the sourcing bash and are satisfied with the script being stopped only after a command (such as sleep) finishes, you can kill -INT the bash process.
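The underlying reason grep finds nothing is that a sourced script runs inside the calling shell rather than as a child process, so there is no separate PID to search for. A small self-contained demonstration (the file name is made up):

```shell
# A sourced script executes in the current shell: $$ inside it is the
# shell's own PID, so ps never shows a separate process for it.
demo="${TMPDIR:-/tmp}/src_demo.$$.sh"
printf 'echo "running inside PID $$"\n' > "$demo"
. "$demo"                    # sourced: prints this shell's PID
echo "this shell is PID $$"
rm -f "$demo"
```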

Run a cronjob at a specific time

I would like to run a specific script at a certain time (only once!). If I run it normally like this:
marc#Marc-Linux:~/tennis_betting_strategy1/wrappers$ Rscript write_csv2.R
It does work. I however would like to program it in a cronjob to run at 10:50 and therefore did the following:
50 10 11 05 * Rscript ~/csv_file/write_csv.R
This does not seem to work, however. Any thoughts on where I went wrong? These are the details of the cron package I'm using:
PID COMMAND
1015 cron
My system time also checks out:
marc#Marc-Linux:~/tennis_betting_strategy1/wrappers$ date
wo mei 11 10:56:46 CEST 2016
There is a special tool for running commands only once: at.
With at you can schedule a command like this:
at 09:05 am today
at> enter your commands...
Note, you'll need the atd daemon running.
Your crontab entry looks okay, however. I'd suggest checking if the cron daemon is running (the exact daemon name depends on the cron package; it could be cron, crond, or vixie-cron, for instance). One way to check if the daemon is running is to use the ps command, e.g.:
$ ps -C cron -o pid,args
PID COMMAND
306 /usr/sbin/cron
Some advice:
Read more about the PATH variable. Notice that it is set differently in interactive shells (see your ~/.bashrc) and in cron or at jobs. See also this about Rscript.
Replace your command with a shell script, e.g. in ~/bin/myrscriptjob.sh
That myrscriptjob.sh file should start with #!/bin/sh
Be sure to make that shell script executable:
chmod u+x ~/bin/myrscriptjob.sh
Add some logging in your shell script, near the start; either use logger(1) or at least some date(1) command suitably redirected, or even both:
#!/bin/sh
# file myrscriptjob.sh
/bin/date +"myrscriptjob starting %c %n" > /tmp/myrscriptjob.start
/usr/bin/logger -t myrscript job starting $$
/usr/local/bin/Rscript $HOME/csv_file/write_csv.R
In the last line above, replace /usr/local/bin/Rscript with the output of which Rscript run in some interactive terminal.
Notice that you should not use ~ in shell scripts (replace it with $HOME where appropriate).
Finally, use at to run your script once. If you want to run it periodically in a crontab job, give the absolute path, e.g.
5 09 11 05 * $HOME/bin/myrscriptjob.sh
and check in /tmp/myrscriptjob.start and in your system log if it has started successfully.
BTW, in your myrscriptjob.sh script, you might replace the first line #!/bin/sh with #!/bin/sh -vx (then the shell is verbose about execution, and cron or at will send you some email). See dash(1), bash(1), execve(2)
Use the full path (starting from /) for both Rscript and write_csv2.R. Check the sample command as follows:
/tmp/myscript/Rscript /tmp/myfile/write_csv2.R
Ensure you have execute permission on Rscript and write permission in the folder where write_csv2.R's output will be created (/tmp/myfile).

root user of linux spawning lots of processes of python script uncontrollably

I wrote a python script to work with a message queue, and the script was launched by crontab. I removed it from crontab, but the root user of my Linux system keeps launching it every 9 minutes.
I've rebooted the system and restarted cron, but this script keeps getting executed.
Any idea how to keep it from happening?
If you start a cron job, the service does not stop even if you delete the file in which you specified the job.
This link should help:
https://askubuntu.com/questions/313033/how-can-i-see-stop-current-running-crontab-tasks
Also, you can kill your cron process by looking up its PID using ps -e | grep cron-name, then running kill -9 PID
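When hunting for where the job is really defined, it can also help to check every place cron entries can live, not just your own crontab. A sketch (some of these paths may not exist on every distribution):

```shell
# Per-user crontabs and system-wide entries are stored separately;
# a job removed from one place can survive in another.
crontab -l 2>/dev/null                          # your user's crontab
sudo -n crontab -l -u root 2>/dev/null          # root's crontab (needs sudo)
cat /etc/crontab /etc/cron.d/* 2>/dev/null      # system-wide entries
ls /etc/cron.hourly /etc/cron.daily 2>/dev/null || true
```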

Why does ps only return one line of output in my Perl script when I call it with Nagios?

I have this running:
if (open(PS_ELF, "/bin/ps -eLf|")) {
    while (<PS_ELF>) {
        if ($_ =~ m/some regex/) {
            # do some stuff
        }
    }
}
If called locally, the loop runs just fine, once for every output line of ps -eLf
Now if the same script is called from Nagios via NRPE, PS_ELF only contains one line (the first line output by ps).
This puzzles me; what could be the reason?
Maybe this is not limited to/caused by Nagios at all, I just included it for the sake of completeness.
I'm on SUSE Enterprise Linux 10 SP2 and perl v5.8.8.
Although this problem is very old, I experienced the exact same problem today, so I thought I'd share what I found.
The problem is that processes created by the NRPE daemon (can) have a different environment than processes you execute directly in the shell as the NRPE daemon user.
I created the following bash script:
#!/bin/bash
echo `env | grep COLUMNS`
This gives me the environment variable COLUMNS of the current process, which has the same environment as the parent process (the process forked by the NRPE daemon).
When I execute this script as the NRPE daemon user
$ /tmp/check_env.sh
COLUMNS=174
it gives me the value of my current shell window.
But when I execute this script via NRPE, I get:
nagios-server $ check_nrpe -H client -c check_env
COLUMNS=80
This is why ps -eaf output is limited to 80 characters, unless you use the ww parameter for unlimited width, which ignores the COLUMNS environment variable.
I changed 'ps -eLf' to 'ps -eLfww' (ww for unlimited output) and this fixed the problem even if I don't understand why there is a difference when called remotely.
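The truncation can be reproduced locally; a small sketch comparing the two invocations (the head calls just keep the output short):

```shell
# With procps ps, the COLUMNS variable can cap the line width when no
# terminal width is available; ww lifts the limit regardless.
COLUMNS=40 ps -ef | head -n 3     # may be clipped to 40 characters
ps -efww | head -n 3              # full command lines
```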
It's probably more something to do with how NRPE plugins work than Perl itself.
Is your plugin working as explained here (return code + output)?
