Scripts launched from crontab don't work the same as when I run them myself - linux

I have a script on my home theater PC running Ubuntu 22. The purpose of this script is to check:
- When was mouse/keyboard activity last seen?
- Is there audio actively playing (music or video)?
It then determines whether the HTPC is idle or not and relays this information back to my home automation server.
I have a "helper" script that fires this up in a tmux session at boot (mainly so I can attach and debug, since I have it dump output to the terminal).
If I run this script as the main user (htpc), it works great. When it launches from the crontab, PulseAudio states that there is no PulseAudio daemon running. Checking ps shows that both run as htpc.
Output when run directly from a terminal (I'm SSH'd in):
htpc@htpc:~/Documents$ ps aux | grep -i idle
htpc 8397 0.0 0.0 11308 3736 ? Ss 00:03 0:00 tmux new-session -d -s idle-script exec /home/htpc/Documents/report_idle_to_hass.sh
htpc 8398 0.0 0.0 10100 4012 pts/2 Ss+ 00:03 0:00 /bin/bash /home/htpc/Documents/report_idle_to_hass.sh
htpc 8455 0.0 0.0 9208 2180 pts/3 S+ 00:03 0:00 grep --color=auto -i idle
Output of running from a crontab job:
htpc@htpc:~/Documents$ ps aux | grep -i idle
htpc 6720 0.0 0.0 11304 3604 ? Ss 23:57 0:00 tmux new-session -d -s idle-script exec /home/htpc/Documents/report_idle_to_hass.sh
htpc 6721 0.0 0.0 10100 3988 pts/2 Ss+ 23:57 0:00 /bin/bash /home/htpc/Documents/report_idle_to_hass.sh
htpc 6748 0.0 0.0 9208 2416 pts/3 S+ 23:57 0:00 grep --color=auto -i idle
Visually, I don't see why one works and the other doesn't.
tmux output of a manually started (me via SSH) instance:
Counter overflowed: No
[][]
Counter overflowed: No
[][]
Note that the [][] is simply the response from my home automation server. It's empty because the state did not change; it will contain the previous state information when there is a change.
tmux output of a crontab-started instance:
Counter overflowed: Yes
[][]
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
Counter overflowed: Yes
[][]
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
Crontab entry (running crontab -e):
@reboot exec /home/htpc/Documents/start_idle_report.sh
I'll give the script producing the error first. The only part that fails is the pacmd command; the script does successfully grab the window name and the keyboard/mouse last-input time.
check_idle.sh:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export DISPLAY=:0.0
# Set the timeout value to consider device idle
time_to_be_idle_ms=60000
audio_check="`exec pacmd list-sink-inputs | grep \"state: RUNNING\"`"
idle_time="`exec xprintidle`"
current_app="`exec xdotool getwindowfocus getwindowname`"
if [ -z "$audio_check" ]
then
    if [ "$idle_time" -gt "$time_to_be_idle_ms" ]
    then
        if [[ "$current_app" == *"Frigate"* ]]
        then
            # No audio, idle time, but cameras are in focus
            echo "No"
        else
            # No audio, idle time, and cameras aren't in focus
            echo "Yes"
        fi
    else
        # No audio playing, but idle time not met
        echo "No"
    fi
else
    # Playing audio
    echo "No"
fi
start_idle_report.sh:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Kick off the report script
if [[ "`ps aux | grep -i report_idle_to_hass | wc -l`" -eq "1" ]];
then
    tmux new-session -d -s "idle-script" "exec /home/htpc/Documents/report_idle_to_hass.sh"
fi
report_idle_to_hass.sh (some info redacted):
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
submit_state() {
    # This function sends the desired output to my server
}
submit_state "No"
last_state="No"
counter=0
while [ true ]
do
    idle_check="`/home/htpc/Documents/check_idle.sh`"
    if [[ "$idle_check" != "$last_state" ]]
    then
        echo "States changed: $idle_check"
        submit_state "$idle_check"
        last_state="$idle_check"
        counter=0
    else
        counter=$((counter+1))
    fi
    if [ "$counter" -gt 10 ]
    then
        echo "Counter overflowed: $last_state"
        submit_state "$last_state"
        counter=0
    fi
    sleep 2
done
I looked up how PulseAudio figures out what to connect to: https://www.freedesktop.org/wiki/Software/PulseAudio/FAQ/
It lists several environment variables, but when I look with printenv, there are no display-related or pulse-related variables set.
I do not have a ~/.pulse folder to contain the config, and the default pulse file in /etc/pulse is effectively empty (nothing but comments in there).
How is it finding a pulse server when I run it manually but not from crontab?
I thought maybe it was a path issue (even though I define PATH in the script):
htpc@htpc:~/Documents$ which pacmd
/usr/bin/pacmd
I have tried it with 'exec' both removed and added. I tried 'exec' after finding another question on Stack Overflow that suggested it, because cron fires the script off with 'sh -c'; I figured this might be the problem, but it did not fix it.
I expected the behavior to be identical to SSHing in and simply typing the tmux command myself, but for some reason it does not work when started via a crontab job.
It feels a lot like the commands are being executed as a different user, or like some environment-variable issue.

If your issue is that the job runs with cron's environment rather than yours, make sure your entries are in your user-specific crontab (stored under /var/spool/cron/crontabs on Ubuntu MATE 20.04 and edited with crontab -e), not in the system-wide crontab.

Related

Why is Crontab not starting my tcpdump bash script capture?

I have created a simple bash script to start capturing traffic from all interfaces on my Linux machine (Ubuntu 22), but this script should stop capturing traffic 2 hours after the machine has rebooted. Below is my bash script:
#!/bin/bash
cd /home/user/
tcpdump -U -i any -s 65535 -w output.pcap &
pid=$(ps -e | pgrep tcpdump)
echo $pid
sleep 7200
kill -2 $pid
The script works fine if I run it manually, but I need to have it running after every reboot. Whenever I run the script myself, it works without problems:
user@linux:~$ sudo ./startup.sh
[sudo] password for user:
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
1202
35 packets captured
35 packets received by filter
0 packets dropped by kernel
but when I set it in the crontab as
@reboot /home/user/startup.sh
it does not start at reboot. I used ps -e | pgrep tcpdump to check whether the script is running, but there is no output; it seems the script is not starting after the reboot. I don't know if I need root permissions for that. I also checked the file permissions:
-rwxrwxr-x 1 user user 142 Nov 4 10:11 startup.sh
Any suggestion on why it is not starting the script at the reboot?
Suggestion: update your script:
#!/bin/bash
source /home/user/.bash_profile
cd /home/user/
tcpdump -U -i any -s 65535 -w output.pcap &
pid=$(pgrep -f tcpdump)
echo $pid
sleep 7200
kill -2 $pid
Also, inspect the cron execution log: /var/log/cron on Red Hat-style systems; on Ubuntu, cron logs to the syslog (grep CRON /var/log/syslog).
The problem here was that even though the user has root permission, a script that needs to run from crontab at @reboot must be in a crontab edited by root. That was the only way I found to run the script: tcpdump requires root permission, and cron will not start it at boot unless the entry was added with sudo.
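Concretely, the root crontab entry could look like the line below (added via sudo crontab -e); redirecting both streams to a log makes silent @reboot failures diagnosable. The log path is only illustrative:

```shell
# In root's crontab (sudo crontab -e). Logging stdout and stderr makes
# boot-time failures visible; the log path is an assumption, pick your own.
@reboot /home/user/startup.sh >> /var/log/startup-capture.log 2>&1
```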

How to create a cron job in server for starting celery app?

the server where I have hosted my app usually restarts due to maintenance, and when it does, a function that I keep running in the background stops and I have to turn it on manually.
Here are the commands I run over SSH:
ssh -p19199 -i <my ssh key file name> <my username>@server.net
source /home/myapp/virtualenv/app/3.8/bin/activate
cd /home/myapp/app
celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach
I need to start this celery app whenever there is no 'celery' process in the output of ps axuww. If it is already running, it will show:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
myapp 8792 0.1 0.2 1435172 82252 ? Sl Jun27 1:27 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8898 0.0 0.2 1115340 92420 ? S Jun27 0:32 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8899 0.0 0.2 1098900 76028 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8900 0.0 0.2 1098904 76028 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8901 0.0 0.2 1098908 76040 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 28216 0.0 0.0 10060 2928 pts/1 Ss 15:57 0:00 -bash
myapp 28345 0.0 0.0 49964 3444 pts/1 R+ 15:57 0:00 ps axuww
I need the cron job to check every 15 minutes.
You have everything figured out already; just put it inside an IF block in a shell (Bash) script and save it wherever you like (e.g., $HOME). After the script, we'll set up cron.
I'm going to call this script "start_celery.sh":
#!/usr/bin/env bash
set -ue
#
# Start celery only if it is not running yet
#
if ! ps auxww | grep "[c]elery" > /dev/null; then
    source /home/myapp/virtualenv/app/3.8/bin/activate
    cd /home/myapp/app
    celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach
fi
Remember to make it executable!
$ chmod +x start_celery.sh
A few notes:
the output from ps is filtered by grep "[c]elery" because we don't want to match the grep line itself, which would appear if we simply did grep "celery" (reference: https://www.baeldung.com/linux/grep-exclude-ps-results).
Using grep "celery" | grep -v "grep" would not work in our case because we want the lack of celery processes (grep exit code "1") to trigger the IF block.
setting "-ue" in the script fails the whole script in case something goes wrong (this is a precaution I always use, and I leave it here as shared knowledge; even when not using variables (-u), I like to have it). Feel free to remove it. (reference: https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html)
Now that we have the script -- ~/start_celery.sh in our home directory -- we just have to include it in our (user's) crontab jobs. To do that, we can get some help from websites of our (awesome) geek community:
The first one is https://crontab.guru/ . If nothing else, it puts the crontab syntax in very simple, visual, straightforward words. They also offer a paid service in case you want to monitor your jobs.
In the same spirit of online monitoring (maybe of interest; I leave it here for the record and because it's free), you may find https://cron-job.org/ interesting.
Whatever the resources we use to figure out the crontab line we need, "every 15 minutes" is: */15 * * * * .
To set that up, we open our (user's) cronjob table (crontab) and set our job accordingly:
Open crontab jobs:
$ crontab -e
Set the (celery) job (inside crontab):
*/15 * * * * /home/myapp/start_celery.sh
This should work, but I didn't test the whole thing. Let me know how that goes.
I think what you need is a cron job running your script at startup, as long as you don't experience any function crashes.
Start by opening a terminal window and running the command below. Don't forget to fill in <my ssh key file name> and <username>. This will generate the startup.sh file for you and save it in $HOME:
echo -e "ssh -p19199 -i <my ssh key file name> <my username>@server.net\n\nsource /home/myapp/virtualenv/app/3.8/bin/activate\n\ncd /home/myapp/app\n\ncelery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach" >> $HOME/startup.sh
Make it executable using the following command:
sudo chmod +x $HOME/startup.sh
Now create the cron job to run the script when you restart your server. In the terminal create a new cron job.
crontab -e
This will open a new editor screen for you to write the cron job. Press the letter i on your keyboard to start the insert mode and type the following
@reboot sh $HOME/startup.sh
Exit the editor and save your changes by pressing the Esc key, then typing :wq and hitting Enter.
Restart your server and the cron job should run the script for you.
If you want to manage tasks that should run when a system reboots or shuts down, you should not use cron. These tasks are managed by the Power Management Utilities (pm-utils). Have a look at man pm-action and this question for a somewhat detailed explanation.
Obviously, as a user it is not always possible to configure these hooks for pm-util and alternative options are considered. Having a cron-job running every x minutes is an obvious solution that would allow the user to automatically start a job.
The command you would run in cron would look something like:
pgrep -f celery 2>/dev/null || /path/to/celery_startup
Where /path/to/celery_startup is a simple bash script that contains the commands to be executed (the ones mentioned in the OP).
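The `||` guard in that cron line can be sketched with a placeholder pattern; the bracketed first character keeps pgrep -f from matching its own invocation, the same trick as grep "[c]elery" above. The pattern and the echoed action are placeholders, not the real celery command:

```shell
# If no process matches the pattern, pgrep exits non-zero and the
# right-hand side of || runs. '[m]y_worker_placeholder' stands in for
# the real celery pattern; the echo stands in for /path/to/celery_startup.
pgrep -f '[m]y_worker_placeholder' >/dev/null 2>&1 || echo "would start the worker"
```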

How to supress stdout and stderr message when using pkill

I'm trying to kill some processes on Ubuntu 18.04 using the pkill command, but I am not able to suppress the Killed message for some reason.
Here are the processes that are running:
# ps -a
PID TTY TIME CMD
2346 pts/0 00:00:00 gunicorn
2353 pts/0 00:00:00 sh
2360 pts/0 00:00:00 gunicorn
2363 pts/0 00:00:00 gunicorn
2366 pts/0 00:00:00 ps
My attempts to kill the process and supressing logs
# 1st attempt
# pkill -9 gunicorn 2>&1 /dev/null
pkill: only one pattern can be provided
Try `pkill --help' for more information.
#2nd attempt (this killed the process but printed `Killed`, and I had to press `enter` to get back to the command line)
# pkill -9 gunicorn > /dev/null
root@my-ubuntu:/# Killed
#3rd attempt(behavior similar to previous attempt)
# pkill -9 gunicorn 2> /dev/null
root@my-ubuntu:/# Killed
root@my-ubuntu:/#
What is it that I am missing?
I think you want this syntax:
pkill -9 gunicorn &>/dev/null
the &> is a somewhat newer addition in Bash (4.0, I think?) that is a shorthand way of redirecting both stdout and stderr.
Also, are you running pkill from the same terminal session that gunicorn was started on? I don't think pkill itself prints a message like "Killed", which makes me wonder if it is coming from some other process (the shell's job-control notification, perhaps)....
You might be able to suppress it by running set +m in the terminal (to disable job monitoring). To reenable, run set -m.
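Since &> is Bash-specific, scripts that may run under /bin/sh should use the portable spelling > file 2>&1. A quick sketch of the two equivalent forms:

```shell
# Bash-only shorthand: send both stdout and stderr to /dev/null
ls /nonexistent &> /dev/null
# POSIX-portable equivalent: redirect stdout, then duplicate stderr onto it
ls /nonexistent > /dev/null 2>&1
```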
I found the only way to prevent output from pkill was to use the advice here:
https://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
command 1>&- 2>&-
This closes stdout/stderr for the command.

How to get a list of programs running with nohup

I am accessing a server running CentOS (linux distribution) with an SSH connection.
Since I can't always stay logged in, I use "nohup [command] &" to run my programs.
I couldn't find how to get a list of all the programs I started using nohup.
"jobs" only works before I log out; after that, if I log back in, the jobs command shows me nothing, but I can see in my log files that my programs are still running.
Is there a way to get a list of all the programs that I started using "nohup" ?
When I started with $ nohup storm dev-zookeeper,
METHOD1 : using jobs,
prayagupd@prayagupd:/home/vmfest# jobs -l
[1]+ 11129 Running nohup ~/bin/storm/bin/storm dev-zookeeper &
NOTE: jobs shows nohup processes only in the same terminal session where nohup was started. If you close that session or open a new one, it won't show the nohup processes. Prefer METHOD2.
METHOD2 : using ps command.
$ ps xw
PID TTY STAT TIME COMMAND
1031 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1
10582 ? S 0:01 [kworker/0:0]
10826 ? Sl 0:18 java -server -Dstorm.options= -Dstorm.home=/root/bin/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dsto
10853 ? Ss 0:00 sshd: vmfest [priv]
TTY column with ? => nohup running programs.
Description
TTY column = the terminal associated with the process
STAT column = state of a process
S = interruptible sleep (waiting for an event to complete)
l = is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
Reference
$ man ps # then search /PROCESS STATE CODES
Instead of nohup, you should use screen. It achieves the same result - your commands are running "detached". However, you can resume screen sessions and get back into their "hidden" terminal and see recent progress inside that terminal.
screen has a lot of options. Most often I use these:
To start first screen session or to take over of most recent detached one:
screen -Rd
To detach from the current session: Ctrl+A, Ctrl+D
You can also start multiple screens - read the docs.
If you have standard output redirected to "nohup.out", just see who is using this file:
lsof | grep nohup.out
You cannot exactly get a list of commands started with nohup but you can see them along with your other processes by using the command ps x. Commands started with nohup will have a question mark in the TTY column.
You can also just use the top command; your user ID will indicate the jobs running and their times.
$ top
(this will show all running jobs)
$ top -U [user ID]
(This will show jobs that are specific for the user ID)
sudo lsof | grep nohup.out | awk '{print $2}' | sort -u | while read i; do ps -o args= $i; done
returns all processes that use the nohup.out file
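The TTY test described above can also be automated without lsof; a sketch that lists only the current user's processes with no controlling terminal, which is where nohup'd and otherwise daemonized commands end up:

```shell
# pid=,tty=,args= prints those columns with headers suppressed; awk keeps
# rows whose TTY field is "?", i.e. processes with no controlling terminal.
ps -u "$(id -un)" -o pid=,tty=,args= | awk '$2 == "?"'
```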

Running a php script in the background via shell - script never executes on mac os x

I have a php script which is responsible for sending emails based on a queue contained in a database.
The script works when it is executed from my shell as such:
/usr/bin/php -f /folder/email.php
However, when I execute it to run in the background:
/usr/bin/php -f /folder/email.php > /dev/null &
It never completes, and the process just sits stopped in the process table:
clickonce: ps T
PID TT STAT TIME COMMAND
1246 s000 Ss 0:00.03 login -pf
1247 s000 S 0:00.03 -bash
1587 s000 T 0:00.05 /usr/bin/php -f /folder/email.php
1589 s000 R+ 0:00.00 ps T
So my question is how can I run this as a background process and have it actually execute? Do I need to configure my OS? Do I need to change the way I execute the command?
"T" in the "STAT" column indicates a stopped process. I would guess that your script is attempting to read input from stdin and is getting stopped because it is not the foreground process and thus is not allowed to read.
You should check if the script does indeed read something while executing.
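If the stop really is caused by a background read from the terminal (SIGTTIN), giving the process an explicit empty stdin usually prevents it: the read sees EOF instead of suspending the job. A sketch; the php line is the command from the question, and the sh -c line demonstrates the same idea with a plain read:

```shell
# For the command from the question (path from the OP), redirect stdin too:
#   /usr/bin/php -f /folder/email.php < /dev/null > /dev/null 2>&1 &
# The same idea with a plain shell read: with stdin from /dev/null, the
# read returns EOF immediately instead of stopping the background job.
sh -c 'read line < /dev/null; echo "read returned"'
```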
