What process does a cron job run under? - linux

On a Raspberry Pi 2 with Raspbian, I discovered that I can use crontab -e and then add a line like @reboot sudo /root/.nvm/v0.10.26/bin/node /root/tweetmonkey-raspi & to the table to start a node process on boot.
I can't figure out how to quickly kill that process. I don't see it in ps -e. What process is that running under?

Cron jobs are started by cron (or crond), which spawns sh to run your command. In this case, though, your command put node in the background with &, and the sh process then exited, so the node process was reparented to the root process, init. It should still appear in ps -e, just under the name node rather than anything cron-related.
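To find and stop it quickly, you can match on the command line rather than the process name. A minimal sketch using pgrep/pkill from procps (my suggestion, not part of the original answer):

pgrep -af tweetmonkey-raspi    # list the PID and full command line
pkill -f tweetmonkey-raspi     # send SIGTERM to the matching process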

Related

Solution for running scheduled cron job within a Docker container?

I have a Docker container in which I have my Python tools installed, including my Luigi pipeline interface. I would like to run a shell script which kicks off my Luigi pipeline on a weekly basis using cron.
I have searched high and low for a way to get cron to work within a Docker container, but I cannot, for the life of me, get the entry in my crontab to run.
In my file I have:
0 0 * * Sun /data/myscript.sh
followed by a new line. Cron is running in the background - ps aux | grep cron shows /usr/sbin/cron is running. Furthermore, in my /var/log/syslog file, I have:
/USR/SBIN/CRON[2037]: (root) CMD (/data/myscript.sh)
I've also tried using 0 0 * * Sun . /root/.bashrc ; sh /data/myscript.sh
However, my script does not run (when I run my script manually using bash myscript.sh, I get the expected results).
Suggestions?
Scheduled tasks won't run inside a normal container, since there is no scheduler running. The only active task will be the one you have elected to run via the CMD instruction or the ENTRYPOINT.
To execute scheduled tasks, it's more prudent to use the host's scheduler together with docker exec:
docker exec <container> <command>
docker exec <container> /data/myscript.sh
So you would end up with a cron entry on your host, in system-crontab style (note the extra user field), something like:
0 * * * * root docker exec mycontainer /data/myscript.sh
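Applied to the weekly schedule from the question (mycontainer is still a hypothetical name), that would be:

0 0 * * Sun root docker exec mycontainer /data/myscript.sh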
If you have a cluster, you would have to query the cluster first to locate the container, or even have a script do it for you.
A container is meant to run only one main process. You either need to run crond as the main process of the container, or ensure that crond is running alongside your main process. This somewhat breaks the contract / point of containers, but sometimes it's easier to set it up this way. Instructions below:
My Dockerfile has the following ENTRYPOINT:
ENTRYPOINT ["/startup.sh"]
And then within startup.sh I do a couple of things to spin up the container, but most importantly before executing the last command, I have this:
crond
exec start_my_service
crond starts the daemon that runs the cron jobs, and start_my_service then becomes the primary process for my container.
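For context, here is a minimal sketch of how the pieces could fit together. It assumes an Alpine base image, where busybox provides crond; startup.sh and start_my_service come from the answer above, and root-crontab is a hypothetical file holding the cron entries:

FROM alpine:3.19
# busybox crond reads per-user crontab files from /etc/crontabs/<user>
COPY root-crontab /etc/crontabs/root
COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
ENTRYPOINT ["/startup.sh"]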

Running process nohup

There is a benchmarking process that should be run on a system. It takes maybe a day, so I would like to run it with nohup. I use this command:
nohup bash ./run_terasort.sh > terasort.out 2>&1 &
After that I can see its PID in the jobs -l output, but after closing PuTTY it stops (as far as I can tell when I log in again).
This is a KVM virtualized machine.
You are using nohup correctly, as far as I can tell, but you have an issue detecting the process.
jobs -l only lists jobs of the current shell session. Instead, try the following to find the process started in your earlier session:
ps -eafww | grep run_terasort.sh | grep -v grep
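If pgrep is available, a shorter equivalent (my addition, not from the original answer):

pgrep -af run_terasort.sh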

root user of Linux spawning lots of processes of a Python script uncontrollably

I wrote a Python script to work with a message queue, and the script was launched by crontab. I removed it from crontab, but the root user of my Linux system keeps launching it every 9 minutes.
I've rebooted the system and restarted cron, but this script keeps getting executed.
Any idea how to keep it from happening?
Deleting the crontab entry does not stop a job that is already running; it only prevents cron from starting new instances.
This link should help:
https://askubuntu.com/questions/313033/how-can-i-see-stop-current-running-crontab-tasks
You can also kill the running job by looking up its PID with ps -e | grep <script-name>, then running kill -9 <PID>.
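A concrete sketch, using a hypothetical script name worker.py:

ps -ef | grep '[w]orker.py'   # the [w] keeps grep from matching its own process
kill -9 12345                 # replace 12345 with the PID from the second column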

jobs finds nothing after nohup'ing a script

I started a Python script running in the background:
nohup python app.py &
then closed the terminal. A few days later I wanted to check on this job, so I ran
jobs
It listed no jobs, but I'm sure the script app.py is still running.
jobs only lists processes started as jobs of the current shell session; once you close the terminal, the nohup'd process no longer belongs to any shell whose job table you can query. There are a number of simple commands you can run to check whether your nohup'd process is still running, the simplest being lsof | grep nohup, which finds processes that still have a nohup.out file open (this command may take a few seconds to run).
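A more direct check, since the question names the script app.py (these commands are my addition):

ps aux | grep '[a]pp.py'   # the [a] keeps grep from matching its own process
pgrep -af app.py           # equivalent, if pgrep is available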

Stopping process in /etc/inittab kills spawned process. Doesn't happen in rc.local

I'm trying to execute a firmware upgrade while my program is running from inittab. My program runs 2 commands: one to extract the installer script from the tarball, the other to execute the installer script. In my code I'm using the system() function call. These are the 2 command strings below:
system ( "tar zvxf tarball.tar.gz -C / installer.sh 2>&1" );
system( "nohup installer.sh tarball >/dev/null 2>&1 &" );
The installer script requires the tarball as an argument. I've tried using sudo, but I still have the same problem. I've tried nohup with no success. The installer script has to kill my program to do the firmware upgrade, but the installer script itself needs to stay alive.
If my program is run from the command line or rc.local, on my target device, my upgrade works fine, i.e. when my program is killed my installer script continues.
But I need to run my program from /etc/inittab so it can respawn if it dies. To stop my program, the installer script comments out its inittab entry and executes "telinit q". This is where my program dies (but that's what I want it to do); the problem is that it also kills my installer script.
Does anyone know why this is happening and what I can do to solve it?
Thanks in advance.
My guess as to what happens here: init sends the SIGTERM/SIGKILL not only to the process but to the whole process group. It does this to ensure that all children of a process are properly cleaned up. When your program calls system(), it internally does a fork()/exec(). This newly forked process is in the same process group as your program, so it also gets killed.
You could try running your installer script in a new session:
system( "setsid nohup installer.sh tarball >/dev/null 2>&1 &" );
If your system doesn't provide the setsid command-line utility, you can simply write your own: setsid is just a small wrapper around the setsid() system call.
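A minimal sketch of such a wrapper, assuming POSIX. The real util-linux setsid also forks first when the caller is already a process group leader; this sketch skips that for brevity:

/* mini-setsid.c: run a command in a new session. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    /* Detach from the caller's session and process group; setsid()
     * fails with EPERM if we are already a process group leader. */
    if (setsid() == (pid_t)-1)
        perror("setsid");
    execvp(argv[1], &argv[1]);   /* replace ourselves with the command */
    perror("execvp");
    return 127;
}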
