I am trying to deploy my first upstart job so that my server is restarted if it crashes, but no matter what I put into the job I get an "unknown job" error, even if the job does nothing but comments.
This is the job, only a few lines (I'm editing it with emacs):
# comment
description "golf"
script
export HOME="/root"
exec sudo -u ubuntu /usr/local/bin/node /home/ubuntu/golf/node/lib/db-server1.js >> /var/log/golf.log 2>&1
end script
The paths are correct, but I always get the error "unknown job Golf", even if I remove everything from the file. Thanks for any advice.
Most likely, the job name does not match what you expect. Most notably, the upper-case G is quite unusual. Upstart derives the job name from the .conf file name, so are you certain that there's a job definition in /etc/init/Golf.conf, as opposed to /etc/init/golf.conf (note the lower-case g)?
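A quick way to check, assuming the job is meant to be called golf (the paths below are the standard upstart locations on Ubuntu, not something taken from your post):
# Upstart job definitions live in /etc/init (not /etc/init.d), one job per .conf file.
ls /etc/init/ | grep -i golf
# List the jobs upstart actually knows about:
initctl list | grep -i golf
# If the upstart tools are installed, validate the job file:
init-checkconf /etc/init/golf.conf
# Then start it with the exact (lower-case) name:
sudo start golf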
I was very hesitant to post here since this question has been asked a ton, but I've tried pretty much everything I've found on the internet in the last 2 days. I am in my first week using Linux and it's been a wild ride. (Ubuntu 20.04 LTS)
So I made a Node app which opens a browser, logs in to our company webapp and writes down my work hours automatically. I want to run it when the computer reboots, since I mark my hours when I get home; this way I don't forget to mark them. (Note: I have also tried running it every minute, or at the next coming minute, just to be sure it's not about the @reboot entry.)
These are some of the different options I've tried. I can't really remember them all, since I believe I've tried over 100 different variants by now. In the lines below I've also tried both full paths and shorter ones such as bin/node.
@reboot cd /home/sepi/Documents/MyProjects/eas_app && /usr/local/bin/node index.js
@reboot usr/local/bin/node /home/sepi/Documents/MyProjects/eas_app/index.js
@reboot /bin/node /home/sepi/Documents/MyProjects/eas_app/index.js
which node gives: /usr/local/bin/node
First, check where your node binary is:
$ whereis node
and use that exact path in the cron job.
To debug any cron job, the first thing you need to do is redirect stdout and stderr to a log file:
@reboot /usr/local/bin/node /home/sepi/Documents/MyProjects/eas_app/index.js > /home/sepi/out.log 2>&1
This way you will see whether there is any library or path issue.
If you are still facing the issue, then add the lines below to your crontab:
SHELL=/bin/bash
BASH_ENV="/home/user/.bashrc"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
In BASH_ENV, replace user with your username (check it with whoami).
Note: the values for SHELL and PATH can be found with echo $SHELL and echo $PATH respectively.
Also, first add a time-based cron entry to test that everything works, then switch to the @reboot entry for the reboot scenario.
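Putting it all together, a minimal sketch of what the whole crontab could look like (the node path, project path and username are the ones from the question; the log file location is an assumption):
SHELL=/bin/bash
BASH_ENV="/home/sepi/.bashrc"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
# Temporary test entry: runs every minute so you can confirm cron picks the job up at all.
* * * * * /usr/local/bin/node /home/sepi/Documents/MyProjects/eas_app/index.js >> /home/sepi/eas_app.log 2>&1
# Once the test entry works, replace it with the reboot entry.
@reboot cd /home/sepi/Documents/MyProjects/eas_app && /usr/local/bin/node index.js >> /home/sepi/eas_app.log 2>&1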
I have a cron job on an Ubuntu 10.04 server that stopped running for no apparent reason. (The job ran for months and has not been changed.) I am not a *nix guru, so I plead ignorance if this is a simple problem. I can't find any reason or indication why this job would have stopped. I've restarted the server without success. Here's the job:
# m h dom mon dow command
0 * * * * java -jar /home/mydir/myjar.jar >>/home/mydir/crontaboutput.txt
The last line in the output file shows that the program ran on 8/29/2012. Nothing after that.
Any ideas where to look?
There should be something in your system log when the job was run. The other thing you could try is to add 2>&1 to the job to see any errors in your text file. – Lars Kotthoff
This proved to be the key piece of information: adding 2>&1 allowed me to capture an error that wasn't getting reported anywhere else. Note that the 2>&1 has to come after the file redirection, otherwise stderr still goes to cron's own output instead of the file. The completed command line then looked like:
java -jar /home/mydir/myjar.jar >>/home/mydir/crontaboutput.txt 2>&1
Perhaps your cron daemon has stopped, or its configuration has changed (e.g. /etc/cron.deny). I suggest making a shell script and running that from crontab; a sketch is below. I also suggest running some other program through your crontab (just for testing) at some other time. You can use the logger command in your shell script to write to syslog. Look into the system log files.
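A minimal sketch of such a wrapper, reusing the paths from the question (the wrapper's file name is made up):
#!/bin/bash
# /home/mydir/run_myjar.sh -- hypothetical wrapper called from crontab instead of invoking java directly.
# Log start and exit status to syslog so the system log shows whether cron actually ran it.
logger -t myjar-cron "starting myjar.jar"
java -jar /home/mydir/myjar.jar >>/home/mydir/crontaboutput.txt 2>&1
logger -t myjar-cron "myjar.jar exited with status $?"
The crontab entry then becomes:
0 * * * * /home/mydir/run_myjar.sh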
The accepted answer is correct (i.e. check the error logs); that is what pointed out the error in my case. Besides that, check for the following issues:
include('../my_dir/my_file.php') with a relative path may work from a URL, but it will not work when the cron job runs and will spit out an error (see the sketch after this list).
$_SERVER variables are not available inside cron, so if you are using $_SERVER['DOCUMENT_ROOT'] it will not be set and you will get an error in the cron job.
Make sure to test the cron job and have it do something observable (send an email, etc.) so you can confirm it actually ran.
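For the relative-include issue, the usual workaround is to cd into the script's directory in the crontab entry so relative paths resolve the same way they do for the web server; the paths and schedule here are assumptions, not values from the question:
# Hypothetical paths -- adjust to wherever the PHP script actually lives.
*/5 * * * * cd /var/www/html/my_dir && /usr/bin/php my_file.php >> /tmp/my_file.cron.log 2>&1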
I have built a form (using JavaScript, jQuery, PHP and HTML) that makes it easy for non-technical people to compose and fire the command that creates an ISO image containing CentOS Linux and a company application built along with it. Here is the actual command with sample values:
./test.pl --verbose --output tvmTEST.iso --virtual --isv 4.1.5.1.4147.8.0 --64bit --netproto static --hostname tvmTEST --address 192.168.5.235 --netmask 255.255.255.0 --gateway 192.168.5.252 --nameserver 192.168.5.21,192.168.5.2
This exact command works properly when fired from the shell while logged in as root, and the ISO gets created successfully. However, it doesn't work through the GUI. The form composes the command properly and passes it to the PHP code, where I call the Perl program that builds the ISO image. I tested the command composed by the PHP program in the shell and it created the ISO! When fired through the form, the Perl program runs as user apache, but it dies at line #665, where it says:
system("sudo mount -o loop $c{centosiso} $mp") and die;
I tried printing the string passed to system() above and it printed:
sudo mount -o loop /tmp/test.pl-cache/CentOS-5.4-x86_64-bin-1of7.iso /mnt/CentOS
So, I tried firing this command from the shell and it actually mounted the ISO! However, the permissions for /mnt/CentOS changed to 755, and it's not clear to me why. Note that I tested it both with and without sudo in that line above.
Prior to this, the permissions for /mnt/CentOS were set to 777 and the owner was apache! Are these permissions the reason why my form isn't working? Am I on the right track?
You might also try using qx( … ) around the command instead of using the system function.
This operator tells Perl to run the command in a shell. I have had it happen that system would fail where the same command would run with qx.
I preferred that solution to trying to find out why system command was being obstinate.
One nice difference is that system returns the exit value of the command, while qx returns the command's output, so you can assign the result to a variable and print it for debugging purposes.
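Separately from the system-versus-qx question, it is worth confirming that the apache user is allowed to run mount through sudo at all; by default sudo may prompt for a password or refuse because there is no tty when the call comes from the web server. A sketch of a sudoers entry, assuming the web server user really is apache and mount lives in /bin (add it with visudo, never by editing /etc/sudoers directly):
# Lets the apache user run mount/umount non-interactively, without a password or a tty.
Defaults:apache !requiretty
apache ALL=(root) NOPASSWD: /bin/mount, /bin/umount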
I'm trying to run a script which stops and starts Tomcat on Linux.
When I run it from the command line it works fine, but it does not seem to work when I run the same script from the "Execute Shell" build step in a Jenkins/Hudson job. Jenkins doesn't report any errors, but if I try going to the Tomcat page I get a page-not-found error.
So Jenkins seems able to stop the server, but not to bring it back up.
I'd be grateful for any help.
Try overriding BUILD_ID in your "Execute Shell" block. Jenkins uses BUILD_ID to identify the processes a build has spawned and kills them when the build finishes, so clearing it keeps Tomcat alive. You may not even need nohup in this case:
BUILD_ID=
./your_hudson_script_that_starts_tomcat.sh
Without seeing your script it is difficult to give an exact answer. However you could try adding the following to the start of your script (assuming it is a bash script):
# Trace executed commands.
set -x
# Save stdout / stderr in files
exec >/tmp/my_script.stdout
exec 2>/tmp/my_script.stderr
You could also try adding
set -e
to make the shell exit immediately if a command returns an error status.
If it looks as though Hudson is killing off Tomcat then you might want to run it within nohup (if you're not already doing that):
nohup bin/startup.sh >/dev/null 2>&1 &
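Putting the two answers together, the whole "Execute Shell" block might look something like this sketch (the script name is the placeholder from the first answer; the log path is arbitrary):
#!/bin/bash
set -x    # trace each command into the Jenkins console output
set -e    # stop at the first command that fails
# Any value different from the real build id keeps Jenkins' process tree killer
# away from the Tomcat process; the first answer simply sets it to empty.
export BUILD_ID=dontKillMe
nohup ./your_hudson_script_that_starts_tomcat.sh >/tmp/tomcat_start.log 2>&1 &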
I'm struggling to debug a cron job which isn't working correctly. The cron job calls a shell script which should unrar a rar file - this works correctly when I run the script manually, but for some reason it's not working via cron. I am using the absolute file path and have verified that the path is correct. Has anyone got any ideas why this could be happening?
Well, you already said that you have used absolute paths, so the number one problem is dealt with.
Next to check are permissions. Which user is the cron job run as? Does it have all the permissions necessary?
Then, a little trick: if a shell script fails and it isn't run in a terminal, I like to redirect its output to a file. Right at the start of the script, add:
exec &>/tmp/my.log
This will redirect STDOUT and STDERR to /tmp/my.log. It might also be a good idea to add the line:
set -x
This will make bash print which command it's about to execute, and at what nesting level.
Happy debugging!
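Combined, the top of the cron-invoked script could look like this sketch (the unrar paths are placeholders, since the real ones are not in the question):
#!/bin/bash
exec &>/tmp/my.log   # send both stdout and stderr to /tmp/my.log
set -x               # print each command before it is executed
# Hypothetical example of the actual work the script does:
/usr/bin/unrar x /absolute/path/to/archive.rar /absolute/path/to/output/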
The first thing to check when cron jobs fail is whether the full environment is available to the script you are trying to execute. A job executed via cron runs as a detached process, meaning it is not associated with a login environment. Therefore, whenever you debug a cron job that works when you execute it manually, you need to be sure that the same environment is available to the cron job as is available to you when you run it by hand. This includes PATH and any other environment variables the script may depend on.
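A quick way to compare the two environments is to dump what cron actually sees and diff it against your login shell (the file names are arbitrary):
# Temporary crontab entry: write cron's environment to a file once a minute.
* * * * * env > /tmp/cron_env.txt 2>&1
# Then, in your normal shell:
env > /tmp/shell_env.txt
diff /tmp/cron_env.txt /tmp/shell_env.txt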
For me, the problem was a different shell interpreter in crontab.
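If the interpreter turns out to be the problem in your case too, force it explicitly, either in the crontab or in the script itself:
# At the top of the crontab:
SHELL=/bin/bash
# Or as the first line of the script being called:
#!/bin/bash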