Scrapyd not starting at boot-up - Ubuntu 14 - linux

I have Scrapyd installed on a server.
I want it to start whenever the server restarts.
I created a job at /etc/init/myjob.conf with the following contents:
description "my job"
start on startup
task
cd /var/www/scrapers && scrapyd >& /dev/null &
I also tried putting the following command in crontab -e:
@reboot cd /var/www/scrapers && scrapyd >& /dev/null &
Neither of them worked.
I checked the cron logs using grep CRON /var/log/syslog, and I can see that the command ran, but Scrapyd did not start:
Mar 30 13:33:28 mani CRON[446]: (root) CMD (cd /var/www/scrapers && scrapyd >& /dev/null &)
As you can see, the command ran as the root user.
If I run that command manually in a terminal, it works!
PS:
I changed the command to
@reboot cd /var/www/scrapers && scrapyd >> /var/www/log.txt
and the log file is created, but it's empty!

You should try to start the process manually and see what happens. The line:
cd /var/www/scrapers && scrapyd
only makes sense if: 1) the directory /var/www/scrapers exists, and 2) scrapyd is a binary that exists in that directory and the PATH environment variable includes the current directory (a "." entry).
Maybe you could try:
cd /var/www/scrapers && ./scrapyd
without redirecting standard output to /dev/null, to see what happens. Always try a command manually before using it in cron.
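For example, a quick manual check along these lines (the log path here is only an illustration) shows whether scrapyd is on the PATH and what it prints when started from that directory:
# show which scrapyd binary, if any, is found on the PATH
which scrapyd
# start it from the scrapers directory, keeping the output visible instead of discarding it
cd /var/www/scrapers && scrapyd 2>&1 | tee /tmp/scrapyd-manual-test.log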

I could not get Scrapyd to start through /etc/rc.local or crontab, so I found a workaround. I am sure there is a better way, but for the time being this worked for me.
I created a Python file, start.py:
import os
os.system('/usr/bin/python3 /home/ubuntu/.local/bin/scrapyd > /home/ubuntu/scrapy.log &')
Then I simply call start.py through crontab.
Add this to the crontab file via crontab -e:
@reboot /usr/bin/python3 /home/ubuntu/start.py
Basically, what I did is execute scrapyd through the shell by calling the Python file through crontab.
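For comparison, here is a sketch of doing the same thing directly in crontab with absolute paths, using the same locations as above; whether scrapyd finds its configuration this way depends on your setup:
# crontab -e entry: start scrapyd at boot, keeping stdout and stderr in a log
@reboot /usr/bin/python3 /home/ubuntu/.local/bin/scrapyd >> /home/ubuntu/scrapy.log 2>&1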

Related

Autorun C++ exe with crontab on boot

I created a crontab file using the following command:
crontab -e
Then, in the crontab file:
@reboot sleep 120 && /home/xavier/run.sh
The run.sh file has:
#crontab -e
SHELL=/bin/bash
PATH=/bin:/sbin/:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
./deepstream-app -c ../../../../samples/configs/deepstream-app/stluke_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_record.txt
The deepstream-app program runs after two minutes, but then it stops; I can see this in the system monitor.
I'm not sure whether the way I'm passing arguments to deepstream-app is correct.
In a terminal, the following two commands are used to run the C++ executable:
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
./deepstream-app -c ../../../../samples/configs/deepstream-app/stluke_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_record.txt
So I run exactly the same commands in the shell script.
What could be wrong?
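As with the other cron problems on this page, a reasonable first step is to capture the job's output so you can see why the program stops; the log path below is only an example:
# crontab -e entry: delay at boot, run the script, and keep stdout/stderr for inspection
@reboot sleep 120 && /home/xavier/run.sh >> /home/xavier/deepstream-cron.log 2>&1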

Restart Opencanary from Crontab

I have a program called opencanary running in a virtual environment on my Raspberry Pi with Ubuntu 18.04 installed. I want to restart it every 30 minutes using crontab. For testing I set the script to run every 3 minutes, as you can see below.
When I execute the script manually, it works fine. When I use crontab to run it, it doesn't, and I can't find out why it fails.
This is what my script looks like:
#!/bin/bash
SHELL=/bin/sh
. /home/pi/.bashrc
source /home/pi/canary-env/bin/activate && cd opencanary && opencanaryd --restart
After creating the script I added it to crontab -e:
*/3 * * * * /home/pi/restartOC.sh>>test.log
When I look at the cron.log file I can see that the script is executed:
Sep 29 08:33:01 DiskStation CRON[20880]: (pi) CMD (cd /home/pi && sh restartOC.sh>>test.log)
the test.log file stays empty.
Does someone know what I am doing wrong?
Edit 05.10.2021
On the opencanary GitHub I was told that I don't need the 'cd opencanary'. I followed the advice and edited my script:
#!/bin/bash
SHELL=/bin/sh
. /home/pi/.bashrc
source /home/pi/canary-env/bin/activate && opencanaryd --restart
The script still works when executed manually, but the problem still exists when running it from cron.
I solved the problem by running 'which opencanaryd' in the terminal;
this returns the path where the opencanaryd command is located.
In my case it is /usr/local/bin/opencanaryd.
With this knowledge it is possible to edit the script so that cron can find the command:
#!/bin/bash
SHELL=/bin/sh
. /home/pi/.bashrc
cd /usr/local/bin/ && . opencanaryd --restart
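An alternative sketch, assuming which opencanaryd really does report /usr/local/bin/opencanaryd as above, is to call the command by its absolute path instead of cd-ing into /usr/local/bin and sourcing it:
#!/bin/bash
# restartOC.sh: restart opencanary from cron using the absolute path to opencanaryd
. /home/pi/.bashrc
source /home/pi/canary-env/bin/activate && /usr/local/bin/opencanaryd --restart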

Root Crontab say command not found in bash script

Currently I am writing a little script which should add a cron job to the root crontab, but it seems that my root crontab has stopped working. When I try to run the crontab commands in my bash script, I get "command not found". It worked for some time and then stopped working yesterday. Now when I enter "sudo crontab -l" I no longer get "no crontab for root". I am not sure what I did wrong. Here is my code:
#!/bin/bash
sudo crontab -l > rootcron 2> /dev/null
sudo echo "test" >> rootcron
sudo crontab rootcron
sudo rm rootcron
You didn't specify when the command is to be run. Typically you would see something like:
*/5 * * * * touch /tmp/test-cron
So basically you probably have an invalid cron file. What are the contents of the file now?
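For example, a minimal sketch of the script with a complete entry (the schedule and the touch command are just the placeholders from above) would be:
#!/bin/bash
# dump the current root crontab (if any), append a complete five-field entry, and reload it
sudo crontab -l > rootcron 2> /dev/null
echo "*/5 * * * * touch /tmp/test-cron" >> rootcron
sudo crontab rootcron
rm rootcron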

Crontab not recognising command

I have a bash script which I want to run as a cron job.
It works fine except for one command.
I redirected its stderr to capture the error and found that the command was not recognized.
It is a root crontab.
Both the current user and root execute the command successfully when I type it in the terminal.
Even the script executes the command when I run it through the terminal.
Startup script:
#!/bin/bash
sudo macchanger -r enp2s0 > /dev/null
sudo /home/deadpool/.logkeys/logger.sh > /dev/null
logger.sh :
#!/bin/bash
dat="$(date)"
echo " " >> /home/deadpool/.logkeys/globallog.log
echo $dat >> /home/deadpool/.logkeys/globallog.log
echo " " >> /home/deadpool/.logkeys/globallog.log
cat /home/deadpool/.logkeys/logfile.log >> /home/deadpool/.logkeys/globallog.log
cat /dev/null > /home/deadpool/.logkeys/logfile.log
cat /dev/null > /home/deadpool/.logkeys/error.log
logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log
error.log:
/home/deadpool/.logkeys/logger.sh: line 10: logkeys: command not found
Remember that cron runs with a different environment than your user account or root does, and its PATH might not include the location of logkeys. You should try the absolute path to logkeys (find it with which logkeys from your user) in your script. Additionally, I recommend looking at this answer on Server Fault about running scripts as if they were run from cron, for when you need to find out why something works for you interactively but not in a job.
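As a sketch, if which logkeys reported /usr/local/bin/logkeys (the real path may differ on your system), the last line of logger.sh would become:
# use the absolute path reported by `which logkeys`; the path shown here is only an example
/usr/local/bin/logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log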

rdiff-backup bash script and cron trouble

I have this very simple bash script:
#!/opt/bin/bash
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/www/vhosts /volume1/backups/vhosts 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/etc /volume1/backups/etc 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics /volume1/homes /volume1/backups/homes 2>&1 >> /var/log/rdiff-backup.log;
cat /var/log/rdiff-backup.log | /opt/bin/nail -s "rdiff-backup log" me@email.com;
If I run the script from the command line, like this:
nohup /path/to/my/script.sh &
it works fine, appending each rdiff-backup statistics report to rdiff-backup.log and sending that file to my email address, as expected. But if I put the script in the crontab, the script runs only one rdiff-backup job and sends the statistics via email. I cannot understand why the script doesn't work the same way...
Any ideas?
this is my cronjob entry:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh
Via crontab only the last job is executed correctly, I think because it is the only local backup. When I execute the script from the command line I use the root user, and the public key of the root user is in /root/.ssh/authorized_keys on the remote machine. The owner of the crontab file is the root user too; I created it through "crontab -e" using the root account.
First of all, you need to make sure the script used in cron doesn't output anything, otherwise:
1) cron will assume there is an error
2) you will not see the error, if any
A solution for this is to use:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh >> /var/log/rdiff-backup-cron.log 2>&1
Second of all, it appears you are losing environment variables when executing via cron; try adding the environment settings to your script:
#!/opt/bin/bash
. /root/.profile
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log
If /root/.profile doesn't exist, try adding . /root/.bashrc or . /etc/profile instead.
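Putting both points together, a sketch of the top of the script might look like this; note the redirection is written as > file 2>&1 so that stderr actually ends up in the log:
#!/opt/bin/bash
# load the environment that cron does not provide (PATH and so on)
. /root/.profile
# send both stdout and stderr of the first job to the log; later jobs would use >> instead
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql > /var/log/rdiff-backup.log 2>&1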
I hope this helps.
