I have some cronjobs that I always used and worked fine. But now, trying to move everything to Docker containers, I'm running into these errors:
/usr/bin/service: 127: /usr/bin/service: stop: not found
/usr/bin/service: 128: exec: start: not found
They occur when these cronjobs run commands like "service restart nginx". Note that the same commands work fine outside the cronjobs.
The PATH is correctly set in /etc/crontab. Adding it to the individual cron files in /etc/cron.d doesn't work either. I also tried changing SHELL=/bin/sh to SHELL=/bin/bash in /etc/crontab (even though it's less secure, I wanted to try), but that didn't work either.
Any ideas?
I solved it by changing the command from
"service start mysql"
to
"/sbin/start mysql &"
Good luck
Enric
I am not sure, but I think the problem is with paths. I believe you want to run /usr/sbin/service, not /usr/bin/service. Try specifying the full path to service instead of just service.
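If you're unsure where service actually lives inside the container, you can resolve it first instead of guessing. A quick sketch (the nginx entry is just an example):

```shell
#!/bin/sh
# Resolve the absolute path of 'service'; on Debian/Ubuntu-based images
# it is normally /usr/sbin/service, not /usr/bin/service.
SERVICE_BIN=$(command -v service || echo /usr/sbin/service)
echo "cron entry should use: $SERVICE_BIN nginx restart"
```

Cron runs with a minimal PATH that often omits /usr/sbin, which is why the absolute path matters here.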
Oh god why is this so hard. I've now spent 3 days trying to get this seemingly simple crap to work.
I need it to:
- npm install on CI server (works)
- run tests (works)
- build angular frontend (works)
- ship code to server via rsync (works)
- ssh into server (works)
- - and npm install (doesn't work. dies because of npm warnings, I think)
- - restart pm2 process (doesn't work as there's no elegant way to say start or restart)
At the deploy step, I have this script in the codeship UI
rsync -avz --exclude 'node_modules' ~/clone/
root@xxx.xxx.xxx.xxx:/root/my-project/
ssh root@xxx.xxx.xxx.xxx "cd /root/my-project && bash ./postDeploy.sh"
Then the postDeploy.sh script is this:
#!/bin/sh
export PATH=$PATH:/usr/local/bin
npm install --silent &> /dev/null
/usr/local/bin/pm2 stop --silent keystone &> /dev/null
/usr/local/bin/pm2 start keystone.js 2> /dev/null
I'm trying to swallow errors with the &> /dev/null trick.
There are a few vulnerabilities in the project that are unfortunately deep inside a core module and not fixable by me so I need npm to just be quiet in this case.
Then there's the PM2 thing which is slightly annoying. I need to issue a stop command, but if the service is not running it will fail, so again I need to swallow errors. The start command is probably fine.
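One thing I've since noticed: &> is a bash-ism, and postDeploy.sh starts with #!/bin/sh, where `cmd &> /dev/null` is actually parsed as `cmd &` (background the command) plus an empty redirect. A portable sketch of what I think the script should be (guarded with command -v so it's a no-op where pm2 isn't installed; keystone is my app name from above):

```shell
#!/bin/sh
# POSIX sh: use '>/dev/null 2>&1' instead of the bash-only '&>'
# (under /bin/sh, 'cmd &> /dev/null' backgrounds cmd and truncates the file).
# '|| true' swallows the exit code, so a failed 'pm2 stop' (process not
# running yet) doesn't make the CI runner mark the step as failed.
if command -v pm2 >/dev/null 2>&1; then
    npm install --silent >/dev/null 2>&1 || true
    pm2 stop keystone >/dev/null 2>&1 || true
    pm2 start keystone.js
else
    echo "pm2 not found, skipping"
fi
```
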
I think maybe what's happening now is that because I swallow all output, Codeship's script runner assumes the step failed?
I have tried to use the half-baked debug tool, but it magically asks me for a password when I try to log in... Eh?
Also @codeship it would be amazing if 80% of the helpful articles that google has indexed didn't lead to dead pages on your site...
I'd say this was a correct instinct. There are too many possible scenarios for why you're coming across these unintended behaviors, and nothing short of running the build live with an ssh debug session is likely to get to the bottom of it.
Please see our documentation section for troubleshooting password prompts for ssh debug sessions.
If an ssh debug session doesn't solve your situation, then please reach out to us at support@codeship.com with your build url and we'll take a closer look.
When I run my nodejs application on my Ubuntu Linux server using node server.js it works correctly, and outputs the value of the environment variable $db using process.env.db.
However, the app breaks when running sudo pm2 start server.js, seeing the environment variable as undefined.
I've tried adding the variable in the following files:
/etc/environment: db="hello"
~/.ssh/environment: db="hello"
~/.bashrc: export db="hello"
I've also rebooted and run source ~/.bashrc to ensure the variable is available.
I think I've tried everything mentioned here, I don't know what else to do:
https://unix.stackexchange.com/questions/117467/how-to-permanently-set-environmental-variables
https://github.com/Unitech/pm2/issues/867
Why does an SSH remote command get fewer environment variables than when run manually?
https://serverfault.com/questions/389601/etc-environment-export-path
Note that when you run source ~/.bashrc you are loading the variables into your current user's shell. However, when you run sudo ... you are running as the root user, so those variables aren't there.
What you can do is to use sudo with -E:
sudo -E pm2 start server.js
From man sudo:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
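You can see the same stripping effect without sudo by launching a shell with a clean environment via env -i (db is the variable from the question):

```shell
#!/bin/sh
export db="hello"
# clean environment, roughly what root/daemon contexts get:
env -i sh -c 'echo "clean: db=${db:-<unset>}"'       # prints clean: db=<unset>
# inherited environment, like running 'node server.js' from your own shell:
sh -c 'echo "inherited: db=${db:-<unset>}"'          # prints inherited: db=hello
```
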
Please refer to this thread https://github.com/Unitech/pm2/issues/204
Seems like your environment variables get cached.
I deleted the pm2 process and started it again as normal. Simply restarting the process didn't work, though.
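That delete-and-start dance, as a sketch (guarded so it's a no-op where pm2 isn't installed; 'server' is a hypothetical app name, and --update-env is a flag on newer pm2 versions):

```shell
#!/bin/sh
# Guarded so this is a no-op without pm2 installed.
if command -v pm2 >/dev/null 2>&1; then
    pm2 delete server 2>/dev/null || true     # drop the process and its cached env
    db="hello" pm2 start server.js --name server
    # newer pm2 versions can refresh env on restart instead of delete+start:
    # pm2 restart server --update-env
else
    echo "pm2 not installed, skipping"
fi
```
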
Couldn't seem to find a direct answer around here.
I'm not sure if I should run ./myBinary as a Cron process or if I should run "go run myapp.go"
What's an effective way to make sure that it is always running?
Sorry I'm used to Apache and Nginx.
Also, what are the best practices for deploying a Go app? I want everything (preferably) served on the same server, just like my development environment.
I read something else that used S3, but, I really don't want to use S3.
Use the capabilities your init process provides. You're likely running a system with either Systemd or Upstart. They both have really easy service descriptions and can ensure your app runs with the right privileges, is restarted when anything goes down, and that its output is handled correctly.
For quick Upstart description look here, your service description is likely to be just:
start on runlevel [2345]
stop on runlevel [!2345]
setuid the_username_your_app_runs_as
exec /path/to/your/app --options
For quick Systemd description look here, your service is likely to be just:
[Unit]
Description=Your service
[Service]
User=the_username_your_app_runs_as
ExecStart=/path/to/your/app --options
Restart=on-failure
[Install]
WantedBy=multi-user.target
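Once that unit file is saved (assuming /etc/systemd/system/myapp.service), activating it is just a couple of commands. A guarded sketch, so it only acts as root on a live systemd system with the unit actually installed:

```shell
#!/bin/sh
# Guarded: only acts as root, on a running systemd, with the unit present.
if [ -d /run/systemd/system ] && [ "$(id -u)" -eq 0 ] \
   && [ -f /etc/systemd/system/myapp.service ]; then
    systemctl daemon-reload                   # pick up the new unit file
    systemctl enable --now myapp.service      # start now and on every boot
    systemctl status --no-pager myapp.service
else
    echo "systemd unit not installed here, skipping"
fi
```
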
You can put it in an infinite loop, such as:
#! /bin/sh
while true; do
go run myapp.go
sleep 2 # Just in case
done
Hence, once the app dies for some reason, it will be run again.
You can put it in a script and run it in background using:
$ nohup ./my-script.sh >/dev/null 2>&1 &
You may want to go for virtual terminal utility like screen here. Example:
screen -S myapp # create screen with name myapp
cd ... # to your app directory
go run myapp.go # or go install and then ./myapp from the go bin dir
Ctrl-a d # to detach from the screen
If you want to return to the screen:
screen -r myapp
EDIT: this solution will keep the process alive when you log out of the terminal, but won't restart it when it crashes.
I'm deploying a node web application as an upstart service using grunt and monitoring it using monit. However:
My upstart and monit configuration duplicate each other a little bit
Upstart doesn't do variable expansion inside env stanzas
I can't find a way to configure monit dynamically (ala upstart .override files)
My question
This means I'm looking for a grunt plugin or other tool that I can use to generate the upstart .conf and monit conf.d/ files. Can you please help me find one (or suggest a better way of robustly running my node web app)?
A rather crude solution?
To be honest, an underscore template of the upstart and monit files would probably be sufficient, and that's what I'll wrap up into a grunt plugin if there isn't a ready-made solution. But this feels like a problem other people must have run into as well, so I imagine there's a solution out there; I just can't find it.
Detail
A bit more detail to illustrate the three problems. My upstart conf file looks like this:
setuid node
setgid node
# ...
script
mkdir -p /home/node/.my-app
echo $$ > /home/node/.my-app/upstart.pid
/usr/local/bin/node /var/node/my-app/server.js >> /var/node/my-app/logs/console.log 2>&1
end script
# ...
And my monit configuration looks like this:
check process node with pidfile /home/node/.my-app/upstart.pid
start program = "/sbin/start my-app" with timeout 60 seconds
stop program = "/sbin/stop my-app"
if failed host localhost port 17394 protocol http
and request "/monit.html"
then restart
if 3 restarts within 5 cycles then timeout
As you can see, the PID file path and config directory are duplicated across the two (problem 1), and I'd love to parameterise the host, port and request URL in the monit file (problem 3).
For problem 2 (no variable expansion in upstart's env stanza, bug report here) there are a couple of workarounds that work for variables used inside *script blocks, which are interpreted as bash scripts, but they don't seem to work in the conf file itself. I think this makes it impossible to specify the user id the app should run as in a configuration file?
Those techniques I just mentioned:
Method 1: Don't use env - echo the variables in the pre-script to a file and then source it later
Method 2: Duplicate the variable expansion in all the script bodies where it is needed
Method 3: Store the variables in a file and cat them in using command substitution
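For reference, Method 1 looks roughly like this in an upstart conf (a sketch; MY_PORT and the paths are placeholders for whatever values need expanding):

```
# Method 1 sketch: compute values in pre-start, write them to a file,
# then source that file in the main script block.
pre-start script
    mkdir -p /var/run/my-app
    echo "MY_PORT=17394" > /var/run/my-app/env
end script

script
    . /var/run/my-app/env
    exec /usr/local/bin/node /var/node/my-app/server.js --port "$MY_PORT"
end script
```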
...or suggest a better way of robustly running my node web app
Use PM2 by Unitech instead.
I run all my node apps using PM2. It works perfectly and is easy to configure. It has built-in functionality for autogenerating startup scripts, and it gives you logging, monitoring and easy maintenance.
Here is a good article showing off the highlights.
Install
npm install -g pm2
Start app
pm2 start app.js
Show running apps
pm2 list
Make a startup script
pm2 startup [ubuntu|centos|systemd]
More details in the readme on their github page
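If you use the startup script, it's also worth freezing the current process list so pm2 resurrects your apps after a reboot. A guarded sketch (no-op where pm2 isn't installed):

```shell
#!/bin/sh
# Guarded so this is a no-op without pm2 installed.
if command -v pm2 >/dev/null 2>&1; then
    pm2 startup    # prints/installs the init script for this platform
    pm2 save       # snapshot the process list so pm2 restores it at boot
else
    echo "pm2 not installed, skipping"
fi
```
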
I have a python script which manages an Erlang daemon. Everything works fine when used through a shell once the system is initialized.
Now, when I included the same script under "/etc/init.d" and with the symlinks properly set in "/etc/rcX.d", the python script still works but my Erlang daemon fails to start and leaves no discernible traces (e.g. crash_dump, dmesg etc.)
I also tried setting the environment variable "HOME" through 'erl -env HOME /root' and still no luck.
Any clues?
To manually run the script the same way the system does, use service daemon start if you have that command, or else try
cd /
env -i LANG="$LANG" PATH="$PATH" TERM="$TERM" /etc/init.d/daemon start
That forces the script to run with a known, minimal environment just like it would at startup.
Thanks for this answer - I was having a devil of a time starting the "Alice" RESTful interface to rabbitmq on startup. The key was using 'env HOME=/root /path/to/alice/startup/script' in my init script.