Test a weekly cron job [closed] - linux

I have a #!/bin/bash script in the cron.weekly directory.
Is there a way to test whether it works? I can't wait a week.
I am on Debian 6 with root access.

Just do what cron does and run the following as root:
run-parts -v /etc/cron.weekly
... or the next one if you receive the "Not a directory: -v" error:
run-parts /etc/cron.weekly -v
Option -v prints the script names before they are run.
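If you just want to see which scripts would be run, without actually executing them, run-parts also has a --test option (at least on Debian-based systems):
run-parts --test /etc/cron.weekly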

A wee bit beyond the scope of your question... but here's what I do.
The "how do I test a cron job?" question is closely connected to "how do I test scripts that run in non-interactive contexts launched by other programs?" In cron, the trigger is some time condition, but lots of other *nix facilities launch scripts or script fragments in non-interactive ways, and often the conditions in which those scripts run contain something unexpected and cause breakage until the bugs are sorted out. (See also: https://stackoverflow.com/a/17805088/237059 )
A general approach to this problem is helpful to have.
One of my favorite techniques is to use a script I wrote called 'crontest'. It launches the target command inside a GNU screen session from within cron, so that you can attach with a separate terminal to see what's going on, interact with the script, even use a debugger.
To set this up, you would use "all stars" in your crontab entry, and specify crontest as the first command on the command line, e.g.:
* * * * * crontest /command/to/be/tested --param1 --param2
So now cron will run your command every minute, but crontest will ensure that only one instance runs at a time. If the command takes time to run, you can do a "screen -x" to attach and watch it run. If the command is a script, you can put a "read" command at the top to make it stop and wait for the screen attachment to complete (hit enter after attaching).
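For example, a hypothetical target script might start with a pause like this, so it waits for you to attach before doing any real work:
#!/bin/bash
# Pause so there is time to attach with "screen -x" before the real work starts
read -p "Attached via screen? Press Enter to continue..."
# ... the rest of the script under test ...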
If your command is a bash script, you can do this instead:
* * * * * crontest --bashdb /command/to/be/tested --param1 --param2
Now, if you attach with "screen -x", you'll be facing an interactive bashdb session, and you can step through the code, examine variables, etc.
#!/bin/bash
# crontest
# See https://github.com/Stabledog/crontest for canonical source.
# Test wrapper for cron tasks. The suggested use is:
#
# 1. When adding your cron job, use all 5 stars to make it run every minute
# 2. Wrap the command in crontest
#
#
# Example:
#
# $ crontab -e
# * * * * * /usr/local/bin/crontest $HOME/bin/my-new-script --myparams
#
# Now, cron will run your job every minute, but crontest will only allow one
# instance to run at a time.
#
# crontest always wraps the command in "screen -d -m" if possible, so you can
# use "screen -x" to attach and interact with the job.
#
# If --bashdb is used, the command line will be passed to bashdb. Thus you
# can attach with "screen -x" and debug the remaining command in context.
#
# NOTES:
# - crontest can be used in other contexts, it doesn't have to be a cron job.
# Any place where commands are invoked without an interactive terminal and
# may need to be debugged.
#
# - crontest writes its own stuff to /tmp/crontest.log
#
# - If GNU screen isn't available, neither is --bashdb
#
crontestLog=/tmp/crontest.log
lockfile=$( if [[ -d /var/lock ]]; then echo /var/lock/crontest.lock; else echo /tmp/crontest.lock; fi )
useBashdb=false
useScreen=$( if which screen &>/dev/null; then echo true; else echo false; fi )
innerArgs="$@"
screenBin=$(which screen 2>/dev/null)

function errExit {
    echo "[-err-] $@" | tee -a $crontestLog >&2
    exit 1
}

function log {
    echo "[-stat-] $@" >> $crontestLog
}

function parseArgs {
    while [[ ! -z $1 ]]; do
        case $1 in
            --bashdb)
                if ! $useScreen; then
                    errExit "--bashdb invalid in crontest because GNU screen not installed"
                fi
                if ! which bashdb &>/dev/null; then
                    errExit "--bashdb invalid in crontest: no bashdb on the PATH"
                fi
                useBashdb=true
                ;;
            --)
                shift
                innerArgs="$@"
                return 0
                ;;
            *)
                innerArgs="$@"
                return 0
                ;;
        esac
        shift
    done
}

if [[ -z $sourceMe ]]; then
    # Lock the lockfile (no, we do not wish to follow the standard
    # advice of wrapping this in a subshell!)
    exec 9>$lockfile
    flock -n 9 || exit 1

    # Zap any old log data:
    [[ -f $crontestLog ]] && rm -f $crontestLog

    parseArgs "$@"

    log "crontest starting at $(date)"
    log "Raw command line: $@"
    log "Inner args: $innerArgs"
    log "screenBin: $screenBin"
    log "useBashdb: $( if $useBashdb; then echo YES; else echo no; fi )"
    log "useScreen: $( if $useScreen; then echo YES; else echo no; fi )"

    # We're building a command line.
    cmdline=""

    # If screen is available, put the task inside a pseudo-terminal
    # owned by screen. That allows the developer to do a "screen -x" to
    # interact with the running command:
    if $useScreen; then
        cmdline="$screenBin -D -m "
    fi

    # If bashdb is installed and --bashdb is specified on the command line,
    # pass the command to bashdb. This allows the developer to do a "screen -x" to
    # interactively debug a bash shell script:
    if $useBashdb; then
        cmdline="$cmdline $(which bashdb) "
    fi

    # Finally, append the target command and params:
    cmdline="$cmdline $innerArgs"
    log "cmdline: $cmdline"

    # And run the whole schlock:
    $cmdline

    res=$?
    log "Command result: $res"

    echo "[-result-] $(if [[ $res -eq 0 ]]; then echo ok; else echo fail; fi)" >> $crontestLog

    # Release the lock:
    exec 9>&-
fi

After messing about with some stuff in cron which wasn't instantly compatible I found that the following approach was nice for debugging:
crontab -e
* * * * * /path/to/prog var1 var2 &>>/tmp/cron_debug_log.log
This will run the task once a minute and you can simply look in the /tmp/cron_debug_log.log file to figure out what is going on.
It is not exactly the "fire job" you might be looking for, but this helped me a lot when debugging a script that didn't work in cron at first.
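A related trick (not part of the answer above, just a common complement): dump cron's environment once so you can compare it with your interactive shell when something only fails under cron:
* * * * * env > /tmp/cron_env.log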

I'd use a lock file and then set the cron job to run every minute. (use crontab -e and * * * * * /path/to/job) That way you can just keep editing the files and each minute they'll be tested out. Additionally, you can stop the cronjob by just touching the lock file.
#!/bin/sh
if [ -e /tmp/cronlock ]
then
    echo "cronjob locked"
    exit 1
fi

touch /tmp/cronlock

<...do your regular cron here ....>

rm -f /tmp/cronlock
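Note that the touch-file approach can leave a stale lock if the script dies before removing it. As a sketch (not part of the original answer), a flock(1)-based variant avoids that, because the kernel releases the lock automatically when the process exits:
#!/bin/bash
# Take an exclusive, non-blocking lock on the lock file
exec 9>/tmp/cronlock
flock -n 9 || { echo "cronjob locked"; exit 1; }

<...do your regular cron here ....>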

What about putting it into cron.hourly, waiting until the next run of hourly cron jobs, then removing it? That would run it once within an hour, and in the cron environment. You can also run ./your_script, but that won't have the same environment as under cron.
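If you do want to approximate cron's sparse environment when running the script by hand (a rough approximation only - cron's real environment may differ, and your_script is a placeholder name), a common trick is:
env -i HOME="$HOME" LOGNAME="$LOGNAME" SHELL=/bin/sh PATH=/usr/bin:/bin /etc/cron.weekly/your_script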

Aside from that you can also use:
http://pypi.python.org/pypi/cronwrap
to wrap up your cron to send you an email upon success or failure.

None of these answers fit my specific situation, which was that I wanted to run one specific cron job, just once, and run it immediately.
I'm on an Ubuntu server, and I use cPanel to set up my cron jobs.
I simply wrote down my current settings, and then edited them to be one minute from now. When I fixed another bug, I just edited it again to one minute from now. And when I was all done, I just reset the settings back to how they were before.
Example: It's 4:34pm right now, so I put 35 16 * * *, for it to run at 16:35.
It worked like a charm, and the most I ever had to wait was a little less than one minute.
I thought this was a better option than some of the other answers because I didn't want to run all of my weekly crons, and I didn't want the job to run every minute. It takes me a few minutes to fix whatever the issues were before I'm ready to test it again. Hopefully this helps someone.

The solution I am using is as follows (a minimal sketch of step 2 follows the list):
1. Edit the crontab (crontab -e) to run the job as frequently as needed (every 1 minute or 5 minutes).
2. Modify the shell script that cron executes so it prints its output into some file (e.g. echo "Working fine" >> output.txt).
3. Check the output.txt file with tail -f output.txt, which prints the latest additions to the file, so you can track the execution of the script.
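A minimal sketch of step 2 (the path is just an example):
#!/bin/bash
# Append a timestamped heartbeat line on every cron run
echo "Working fine at $(date)" >> /tmp/output.txt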

I normally test a job I created like this:
It is easier to use two terminals to do this.
In one terminal, run the job:
# ./jobname.sh
In the other, go to /var/log and run:
# tailf /var/log/cron
This lets me see the cron log update in real time. You can also review the log after you run the job, but I prefer watching in real time.
Here is an example of a simple cron job. Running a yum update...
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
Here is the breakdown:
First command will update yum itself and next will apply system updates.
-R 120 : Sets the maximum amount of time yum will wait before performing a command
-e 0 : Sets the error level to 0 (range 0 - 10). 0 means print only critical errors about which you must be told.
-d 0 : Sets the debugging level to 0 - turns up or down the amount of things that are printed. (range: 0 - 10).
-y : Assume yes; assume that the answer to any question which would be asked is yes
After I built the cron job I ran the below command to make my job executable.
#chmod +x /etc/cron.daily/jobname.sh
Hope this helps,
Dorlack

sudo run-parts --test /var/spool/cron/crontabs/
Files in that crontabs/ directory need to be executable by their owner - octal 700.
Source: man cron and NNRooth's answer.

I'm using Webmin because it's a productivity gem for someone who finds command line administration a bit daunting and impenetrable.
There is a "Save and Run Now" button in the "System > Scheduled Cron Jobs > Edit Cron Job" web interface.
It displays the output of the command and is exactly what I needed.

Related

Keep a scheduled cron job from repeating on failure

I have a cron job that runs every minute that involves logging into a database. Our organization mandates we change our passwords every 6 months - and the password change process can be pretty laggy - sometimes taking up to 10 minutes. However, during the latency period between when the password is changed via our tool (and goes into effect) and the change can be made in hard coded instances on our Linux box, this job will incur several failed login attempts to the database. Once it gets past a certain number, the account is locked - so all our subsequent jobs fail until we can get in touch with the DBA to unlock the account.
All the questionable practices demonstrated here notwithstanding (hard coded passwords, etc.) - I'm looking for a means for the cron job to essentially withdraw itself from the schedule if/when the login fails. Yes, doing so manually prior to the password change is possible - but not everyone going through this process is terribly linux/cron/vi literate - hence my search for some form of silver bullet that might avoid this if possible.
Any suggestions much appreciated.
To prevent a cron job from failing more than once, you need a semaphore file. This solution is similar to the comment by tvm. Create the semaphore file outside of the cron job like so:
touch is_ok.txt
It needs to be recreated after every failure to enable the cron job to run. Then wrap your command like so:
if [[ -e is_ok.txt ]] && command ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
Example (using grep foo bar as your cron job command):
# Enable `grep foo bar` to run:
touch is_ok.txt
# Create the input for `grep foo bar` to run successfully:
echo foo > bar
# `grep foo bar` runs successfully repeatedly:
if [[ -e is_ok.txt ]] && grep foo bar ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
# Make `grep foo bar` fail:
rm bar
# `grep foo bar` runs once, then never runs until you do `touch is_ok.txt`:
if [[ -e is_ok.txt ]] && grep foo bar ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
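Put together as a single crontab entry, it might look like this (paths are hypothetical, and [ ] is used instead of [[ ]] because cron's default shell is /bin/sh):
*/5 * * * * cd /path/to/job/dir && if [ -e is_ok.txt ] && /path/to/your_job ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi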
Depending on your box, OS, shell, etc., this might or might not work.
For bash, for instance, you might want to change the command run by cron to something like this:
*/1 * * * * Command || exit
which basically means: every minute, run this command, or exit.
So if the command fails, it will exit - but the failed login attempt will still be logged.

Is it possible to auto reboot for 5 loops through mint?

I am currently using the following command to reboot:
sudo shutdown -r now
However, I need to run it for 5 loops, before and after executing some other programs. I was wondering if it is possible to do this in the Mint environment?
First a disclaimer: I haven't tried this because I don't want to reboot my machine right now...
Anyway, the idea is to make a script that can track its iteration progress in a file, as #david-c-rankin suggested. This bash script could look like this (I did test this):
#!/bin/sh
ITERATIONS="5"
TRACKING_FILE="/path/to/bootloop.txt"
touch "$TRACKING_FILE"
N=$(cat "$TRACKING_FILE" | wc -c)
if [ "$N" -lt "$ITERATIONS" ]; then
    printf "." >> "$TRACKING_FILE"
    echo "rebooting (iteration $N)"
    # TODO: this is where you put the reboot command
    # and anything you want to run before rebooting each time
else
    rm "$TRACKING_FILE"
    # TODO: other commands to resume anything required
fi
Then add a call to this script somewhere it will be run on boot, e.g. cron (@reboot) or systemd. Don't forget to remove it from the startup/boot configuration when you're finished, or the next time you reboot it will reboot N times.
Not sure exactly how you are planning on using it, but the general workflow would look like:
save script to /path/to/reboot_five_times.sh
add script to run on boot (cron, etc.)
do stuff (manually or in a script)
call the script
computer reboots 5 times
anything in the second TODO section of the script is then run
go back to step 3, or if finished remove from cron/systemd so it won't reboot when you don't want it to.
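For the "add script to run on boot" step, a cron-based option is an @reboot entry in the crontab (path as used above):
@reboot /path/to/reboot_five_times.sh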
First create a text document wherever you want; I created one on the Desktop.
Then use this file as a physical counter and write a daemon file to run things at startup.
For example:
#!/bin/sh
var=$(cat a.txt)
echo "$var"
if [ "$var" != 5 ]
then
    var=$((var+1))
    echo "$var" > a.txt
    echo "restart here"
    sudo shutdown -r now
else
    echo "stop restart"
    echo 0 > a.txt
fi
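The script assumes the counter file already exists, so create it once before the first run, in the directory the script runs from (an empty file also works, because $((var+1)) treats an empty value as 0):
echo 0 > a.txt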
Hope this helps
I found a way to create a file at startup for my reboot script. I incorporated it with the answers provided by swalladge and also shubh. Thank you so much!
#!/bin/bash
#testing making a startup application
echo "
[Desktop Entry]
Type=Application
Exec=notify-send success
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_CA]=This is a Test
Name=This is a Test
Comment[en_CA]=
Comment=" > ~/.config/autostart/test.desktop
I created an /etc/rc.local file to execute user3089519's script, and this works for my case. (And for bootloop.txt, I put it here: /usr/local/share/bootloop.txt )
First: sudo nano /etc/rc.local
Then edit this:
#!/bin/bash
#TODO: things you want to execute when system boot.
/path/to/reboot_five_times.sh
exit 0
Then it works.
Don't forget to edit /etc/rc.local and remove /path/to/reboot_five_times.sh when you are done reboot cycling.
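One extra thing worth checking on newer, systemd-based systems: /etc/rc.local must be executable for the rc-local service to run it at boot:
sudo chmod +x /etc/rc.local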

Why doesn't $(which node) work inside a cron job executing a BASH shell?

According to this answer, the reason cron doesn't have access to the environment variables normally associated with a BASH terminal is that it doesn't source the user's .bashrc file.
I have a script which does source my .bashrc file, but it still fails to find my currently in-use version of Node (meaning I need to hard-code the full path and change it with every update!).
Script:
#!/bin/bash
source $HOME/.bashrc # <-- even after sourcing .bashrc, '$(which node)' returns nothing
NODE="$(which node)" # <-- output is blank in cron job
PROCESS="/home/grayedfox/.nvm/versions/node/v8.9.4/bin/node /home/grayedfox/github/blockscrape/main.js"
LOGFILE="/tmp/log.out"
export BLOCKSCRAPECLI="/opt/litecoin-0.14.2/bin/litecoin-cli"
if pgrep -f "$PROCESS" > /dev/null; then
    echo "Blockscrape is doing it's thing - moving on..." >> $LOGFILE
else
    echo "Blockscrape not running! Starting again..." >> $LOGFILE
    echo "Process: $PROCESS" >> $LOGFILE
    echo "Node: $NODE" >> $LOGFILE # <-- outputs only 'Node: ' in log file
    $PROCESS >> $LOGFILE
fi
Crontab:
# make default shell BASH
SHELL=/bin/bash
# reboots litecoin daemon if it dies
#reboot /opt/litecoin-0.14.2/bin/litecoind
# check every minute to see if block scrape running and restart it if not
* * * * * /home/grayedfox/github/blockscrape/restartBlockscrape.sh
I can confirm that node (and doing "which node") works fine in my terminal.
Thanks for the help!
As discussed in this answer, the problem was that in Ubuntu 16.04 (and many prior versions) the standard .bashrc file quits if it's not being run interactively.
Commenting out the code below in my .bashrc fixed it:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
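An alternative to editing .bashrc (a sketch, not from the answer above; it assumes a default nvm install under $HOME/.nvm) is to load nvm explicitly at the top of the cron script, so $(which node) resolves to the nvm-managed version:
#!/bin/bash
# Load nvm directly instead of relying on .bashrc (which bails out when non-interactive)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

NODE="$(which node)"   # now resolves to the nvm-managed node
echo "Node: $NODE"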

Cron not executing the shell script + Linux [duplicate]

I have a script that checks if the PPTP VPN is running, and if not it reconnects the PPTP VPN. When I run the script manually it executes fine, but when I make a cron job, it's not running.
* * * * * /bin/bash /var/scripts/vpn-check.sh
Here is the script:
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$$'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
Finally I found a solution... instead of entering the cron job with
crontab -e
I needed to edit the crontab file directly:
nano /etc/crontab
adding, e.g., something like
*/5 * * * * root /bin/bash /var/scripts/vpn-check.sh
and it's fine now!
Thank you all for your help... I hope my solution will help other people as well.
After a long time getting errors, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
Where test.sh contains:
#!/bin/bash
/usr/bin/python3 /home/joaovitordeon/Documentos/test.py;
In my case, the issue was that the script wasn't marked as executable. To make sure it is, run the following command:
chmod +x your_script.sh
If you're positive the script runs outside of cron, execute
printf "SHELL=$SHELL\nPATH=$PATH\n* * * * * /bin/bash /var/scripts/vpn-check.sh\n"
Do crontab -e for whichever crontab you're using and replace it with output of the above command. This should mirror most of your environment in case there is some missing path issue or something else. Also check logs for any errors it's getting.
Though it definitely looks like the script has an error, or you messed something up when copying it here:
sed: -e expression #1, char 44: unterminated `s' command
./bad.sh: 5: ./bad.sh: [[: not found
A simple alternate script:
#!/bin/bash
if [[ $(ping -c3 192.168.17.27) == *"0 received"* ]]; then
    /usr/sbin/pppd call home
fi
Your script can be corrected and simplified like this:
#!/bin/sh
log=/tmp/vpn-check.log
{ date; ping -c3 192.168.17.27; } > $log
if grep -q '0 received' $log; then
    /usr/sbin/pppd call home >>$log 2>&1
fi
Through our discussion in comments we confirmed that the script itself works, but pppd doesn't, when running from cron. This is because something must be different in an interactive shell like your terminal window, and in cron. This kind of problem is very common by the way.
The first thing to do is try to remember what configuration is necessary for pppd. I don't use it so I don't know. Maybe you need to set some environment variables? In which case most probably you set something in a startup file, like .bashrc, which is usually not used in a non-interactive shell, and would explain why pppd doesn't work.
The second thing is to check the logs of pppd. If you cannot find the logs easily, look into its man page and its configuration files, and try to find the logs or how to enable logging. Based on the logs, you should be able to find what is missing when running in cron, and resolve your problem.
I was having a similar problem that was resolved when sh was put before the command in the crontab.
This did not work:
@reboot ~myhome/mycommand >/tmp/logfile 2>&1
This did:
@reboot sh ~myhome/mycommand >/tmp/logfile 2>&1
My case:
crontab -e
then adding the line:
* * * * * ( cd /directory/of/script/ && /bin/sh /directory/of/script/scriptItself.sh )
In fact, if I added "root" as the user, cron thought "root" was a command, and it didn't work.
As a complement to the other answers: didn't you forget the username in your crontab entry?
Try this:
* * * * * root /bin/bash /var/scripts/vpn-check.sh
EDIT
Here is a patch of your code
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | /bin/sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
In my case, it could be solved by using this:
* * * * * root ( cd /directory/of/script/ && /directory/of/script/scriptItself.sh )
I used some ./folder/-references in the script, which didn't work.
The problem statement is: the script executes when run manually in the shell, but when run through cron it gives a "java: command not found" error.
Please try the 2 options below; they should fix the issue:
Ensure the script is executable. If it's not, execute:
chmod a+x your_script_name.sh
The cron job doesn't run with the same environment as your interactive shell, so it doesn't have access to the same $PATH variable, which means it can't locate the Java executable to run the commands in the script. First fetch the value of the PATH variable and then set (export) it in the script.
echo $PATH can be used to fetch the value of the PATH variable.
Your script can then be modified as below - please see the second line, starting with export:
#!/bin/sh
export PATH=<provide the value of echo $PATH>
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$$'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
    /usr/sbin/pppd call home
fi
First of all, check if cron service is running. You know the first question of the IT helpdesk: "Is the PC plugged in?".
For me, this was happening because the cron job was executing from the /root directory, but my shell script (a script to pull the latest code from GitHub and run the tests) was in a different directory. So I had to edit my script to cd to my scripts folder. My debug steps were:
Verified that my script run independent of cron job
Checked /var/log/cron to see if the cron jobs are running. Verified that the job is running at the intended time
Added an echo command to the script to log the start and end times to a file. Verified that those were getting logged but not the actual commands
Logged the result of pwd to the same file and saw that the commands were getting executed from /root
Tried adding a cd to my script directory as the first line in the script. When the cron job kicked off this time, the script got executed just like in step 1.
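For example, the first line added to the script could be an absolute path, or the common "directory of this script" idiom (these are general idioms, not necessarily the exact line used here):
cd /path/to/scripts || exit 1        # absolute path, or:
cd "$(dirname "$0")" || exit 1       # directory containing the script itself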
It was the timezone in my case. I scheduled the cron job using my local time, but the server had a different timezone, so it did not run at all. Make sure your server has the right time, using the date command.
First run the command env > env.tmp,
then run cat env.tmp.
Copy the full PATH=.................. line and paste it into crontab -e, on a line before your cron jobs.
try this
/home/your site folder name/public_html/gistfile1.sh
set cron path like above

switch desktops after 5 minutes idle (xprintidle): crontab or daemon?

On my Raspberry Pi (running Raspbian) I would like to have the current desktop switched to desktop #0 after 5 minutes of system idle (no mouse or keyboard action), using wmctrl -s 0, with xprintidle for idle-time checking.
Please keep in mind I'm no expert...
I tried 2 different ways, neither of them working, and I was wondering which of them is the best way to get the job done:
bash script and crontab
I wrote a simple script which checks whether xprintidle is greater than a previously set $IDLE_TIME, and if so switches desktops (saved in /usr/local/bin/switchDesktop0OnIdle):
#!/bin/bash
# 5 minutes in ms
IDLE_TIME=$((5*60*1000))

# Sequence to execute when timeout triggers.
trigger_cmd() {
    wmctrl -s 0
}

sleep_time=$IDLE_TIME
triggered=false

while sleep $(((sleep_time+999)/1000)); do
    idle=$(xprintidle)
    if [ $idle -ge $IDLE_TIME ]; then
        if ! $triggered; then
            trigger_cmd
            triggered=true
            sleep_time=$IDLE_TIME
        fi
    else
        triggered=false
        # Give 100 ms buffer to avoid frantic loops shortly before triggers.
        sleep_time=$((IDLE_TIME-idle+100))
    fi
done
The script itself works.
Then I added it to crontab (crontab -e) to have it run every 6 minutes:
*/6 * * * * * sudo /usr/local/bin/switchDesktop0OnIdle
(not sure whether sudo is necessary or not).
Anyway, it doesn't work: googling around, I understood that crontab runs in its own environment with its own variables. Even though I don't remember how to access this environment (oops), I do remember that I get these 2 errors when the script runs there (it works correctly in a "normal" shell):
could not open display (is it important?)
a "-ge: unary operator expected" error or similar: basically xprintidle doesn't work in this environment and gives back an empty value
What am I missing?
infinite-while bash script running as daemon
As a second method, I tried to set up a script with an internal infinite while loop checking whether xprintidle is greater than 5 minutes, in which case the desktop is switched (less elegant?). Also saved in /usr/local/bin/switchDesktop0OnIdle:
#!/bin/bash
triggered=false
while :
do
    if [ "$(xprintidle)" -ge 300000 ]; then
        if [ "$triggered" = false ]; then
            wmctrl -s 0
            triggered=true
        fi
    else
        triggered=false
    fi
done
Again, the script itself works.
I tried to create a daemon in /etc/init.d/switchDesktop0OnIdle (really not an expert here - I modified an existing one):
#! /bin/sh
# /etc/init.d/switchDesktop0OnIdle
### BEGIN INIT INFO
# Provides: switchDesktop0OnIdle
# Required-Start: $all
# Required-Stop: $all
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description:
# Description:
### END INIT INFO
DAEMON=/usr/local/bin/switchDesktop0OnIdle
NAME=switchDesktop0OnIdle

test -x $DAEMON || exit 0

case "$1" in
    start)
        echo -n "Starting daemon: "
        start-stop-daemon --start --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    stop)
        echo -n "Shutting down daemon:"
        start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    restart)
        echo -n "Restarting daemon: "
        start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON
        start-stop-daemon --start --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac

exit 0
I set it up
sudo update-rc.d switchDesktop0OnIdle defaults
and
sudo service switchDesktop0OnIdle start
(necessary?)
...and nothing happens...
Also, I don't find the process with ps -ef | grep switchDesktop0OnIdle, but it seems to be running according to sudo service switchDesktop0OnIdle status.
Can anyone please help?
thank you
Giuseppe
As you suspected, the issue is that when you run your scripts from init or from cron, they are not running within the GUI environment you want them to control. In principle, a Linux system can have multiple X environments running. When you are using one, there are environment variables that direct the executables you are using to the environment you are in.
There are two parts to the solution: your scripts have to know which environment they are acting on, and they have to have authorization to interact with that environment.
You almost certainly are using a DISPLAY value of ":0", so export DISPLAY=:0 at the beginning of your script will handle the first part of the problem. (It might be ":0.0", which is effectively equivalent).
Authorization is a bit more complex. X can be set up to do authorization in different ways, but the most common is to have a file .Xauthority in your home directory which contains a token that is checked by the X server. If you install a script in your own crontab, it will run under your own user id (you probably shouldn't use sudo), so it will read the right .Xauthority file. If you run from the root crontab, or from an init script, it will run as the root user, so it will have access to everything but will still need to know where to take the token from. I think that adding export XAUTHORITY=/home/joe/.Xauthority to the script will work. (Assuming your user id is joe.)
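Put concretely, the top of the idle-check script might gain these two lines (a sketch; ":0" and "/home/joe" are the assumptions discussed above):
#!/bin/bash
# Point the script at the running X session and its auth cookie
export DISPLAY=:0
export XAUTHORITY=/home/joe/.Xauthority

# ... rest of the xprintidle / wmctrl logic ...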
