I need to run a (bash) script after each start of my EC2 instance.
The machine is stopped for 90% of the day - after it wakes up, a script should run.
I tried putting it in the user data, but that only runs on the first init.
After this I followed up here: https://aws.amazon.com/de/premiumsupport/knowledge-center/execute-user-data-ec2/
but this didn't work either - because stopping and starting a machine is apparently not a reboot.
I also added a simple output to rc.local, but again: nothing happens.
Is there a way?
In other words, we're talking about going from the stopped instance state to the running state.
You could use the oneshot feature of systemd
Write scripts mystart.sh and mystop.sh and chmod/chown them so they are executable.
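For example, a minimal sketch of the two scripts (the log paths are just placeholders for whatever your real task is):

#!/bin/bash
# /usr/local/bin/mystart.sh - runs once after each start
echo "$(date): instance started" >> /var/log/mystart.log

#!/bin/bash
# /usr/local/bin/mystop.sh - runs when the instance shuts down
echo "$(date): instance stopping" >> /var/log/mystop.log

sudo chmod 755 /usr/local/bin/mystart.sh /usr/local/bin/mystop.sh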
Then create /etc/systemd/system/mystart.service. Note that we must specify RemainAfterExit=true so that systemd considers the service active after the start script has successfully finished - this also keeps the unit active at shutdown, which is what makes ExecStop run then.
[Unit]
Description=mystart
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/mystart.sh
RemainAfterExit=true
ExecStop=/usr/local/bin/mystop.sh
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Reload systemd (systemctl daemon-reload), enable the unit (systemctl enable mystart.service), and test it by stopping and starting the instance.
Cloud-Init (which runs the User Data script on first boot) can also run scripts:
On every boot
On the next boot only
So, simply install your script in this directory:
/var/lib/cloud/scripts/per-boot/
Each time the instance is booted (started), the script will run. This is a great way to trigger a batch process on the instance.
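For example (myscript.sh is a placeholder name for your own script):

sudo cp myscript.sh /var/lib/cloud/scripts/per-boot/
sudo chmod +x /var/lib/cloud/scripts/per-boot/myscript.sh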
See also: Auto-Stop EC2 instances when they finish a task - DEV Community
I want to auto-restart my application "Fiware IoT Agent" if it stops. The problem is that it depends on the MongoDB database and the Mosquitto broker. My OS is CentOS 7.
Here are the commands I use to launch my three applications, in the following order:
Mongo:
/usr/local/iot/mongodb-linux-x86_64-3.0.5/bin/mongod --dbpath /usr/local/iot/mongodb-linux-x86_64-3.0.5/data/db$
Mosquitto broker:
/usr/sbin/mosquitto -c /etc/iot/mosquitto.conf &
pid=$!
echo $pid > /var/run/iot/mosquitto.pid
IoT Agent:
Then I start my application using these commands:
export LD_LIBRARY_PATH=/usr/local/iot/lib
/usr/local/iot/bin/iotagent -i 192.168.1.11 -p 80 -v DEBUG -d /usr/local/iot/lib -c /etc/iot/config.json
How can I restart my application if it stops, given that it depends on the other two applications? If, for example, MongoDB stops, I must be able to restart it and then restart my application.
CentOS 7 uses systemd. You can create a systemd service for each of your applications and specify dependencies between them, and set Restart=always for the services that need to be auto-restarted.
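As a rough sketch, an iotagent.service unit might look like this, assuming you have also created mongod.service and mosquitto.service units (those unit names are assumptions to adapt):

[Unit]
Description=Fiware IoT Agent
Requires=mongod.service mosquitto.service
After=mongod.service mosquitto.service
[Service]
Environment=LD_LIBRARY_PATH=/usr/local/iot/lib
ExecStart=/usr/local/iot/bin/iotagent -i 192.168.1.11 -p 80 -v DEBUG -d /usr/local/iot/lib -c /etc/iot/config.json
Restart=always
[Install]
WantedBy=multi-user.target

Requires= plus After= ensure the agent is only started once MongoDB and Mosquitto are up; giving those two units Restart=always of their own brings them back automatically if they die.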
You can create your own watchdog code. When you start your application, record the PID of that process and the PID of MongoDB.
Every few seconds (say, 10), check that both PIDs still exist. Alternatively, you can make the programs touch a file every few seconds and check the file's modification time to see whether they are still alive.
If the program hasn't touched the file, or if you go the PID route and the PID no longer exists, then the program has died.
Restart the program, get the new PID, and carry on in an endless while loop.
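A minimal sketch of the PID-checking variant (the PID file paths are assumptions; kill -0 only tests whether a process exists):

#!/bin/bash
# Hypothetical watchdog: restart the stack if the agent or mongod disappears.
while true; do
    for pidfile in /var/run/iot/iotagent.pid /var/run/iot/mongod.pid; do
        if ! kill -0 "$(cat "$pidfile" 2>/dev/null)" 2>/dev/null; then
            echo "$(date): process behind $pidfile died, restarting stack"
            # relaunch MongoDB, Mosquitto and the agent here, rewriting the PID files
        fi
    done
    sleep 10
done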
I'm trying to create a service that will trigger every time a raspberry pi boots. Currently the service runs a really simple script that sends a POST request to a web service endpoint I control. I can trigger said script manually and that part all works perfectly.
I'm struggling with the next step which is to get that script to run after the pi has finished booting. I also need to be able to get it to run without a user logging in.
CURL Script (algiers-startup.local)
#! /bin/bash
echo "Attempting CURL Request"
curl --data "param1=value1&param2=value2" http://10.68.159.28:3000/device
Systemd Service
[Unit]
Description=Algiers RaspberryPi Startup
After=network.target
Before=getty@tty1.service
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/algiers-startup.local
TimeoutSec=30
StandardOutput=tty
RemainAfterExit=no
[Install]
WantedBy=multi-user.target
I see no errors or outputs in the console, no hint that anything has happened at all.
I’ll assume your machine is already set up with Systemd. It’s controlled primarily through the systemctl command. I alias it as such since it’s awful to type all the time:
alias sc=systemctl
alias ssc='sudo systemctl'
You just need to "enable" your service to have it run at boot:
ssc enable algiers-startup
I'm not sure what distro you're using, but custom unit files like algiers-startup.service belong in /etc/systemd/system/ (on Arch and CentOS, /usr/lib/systemd/system/ is where package-provided units live).
You can test your service with sc start algiers-startup. journalctl can show you what's happening.
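For example, to see the unit's output from the current boot:

journalctl -u algiers-startup.service -b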
I have recently completed the Wiki web development tutorial (http://golang.org/doc/articles/wiki/). I had tons of fun and I would like to experiment more with the net/http package.
However, I noticed that when I run the wiki from a console, it takes over the console. If I close the terminal or suspend the process with CTRL+Z, the server stops.
How can I get the server to run in the background? I believe the term for that is running it as a daemon.
I'm running this on Ubuntu 12.04. Thanks for any help.
Simple / Usable things first
If you want a start script without much effort (i.e. not dealing with the process yourself, just having it managed by the system), you could create a systemd service. See Greg's answer for a detailed description of how to do that.
Afterwards you can start the service with
systemctl start myserver
Previously I would have recommended trying xinetd or something similar for finer granularity regarding resource and permission management, but systemd already covers that.
Using the shell
You could start your process like this:
nohup ./myexecutable &
The & tells the shell to start the command in the background, keeping it in the job list.
On some shells, the job is killed via the HANGUP signal when the parent shell exits.
To prevent this, you can launch your command using the nohup command, which makes it ignore the HANGUP signal.
However, this does not work if the called process re-registers a handler for the HANGUP signal.
To be really sure, you need to remove the process from the shell's job list.
For two well known shells this can be achieved as follows:
bash:
./myexecutable &
disown <pid>
zsh:
./myexecutable &!
Killing your background job
Normally, the shell prints the PID of the process, which can then be killed with the kill command to stop the server. If your shell does not print the PID, you can get it using
echo $!
directly after execution. This prints the PID of the forked process.
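Putting those together, a common pattern is to save the PID right away so you can kill the server later (the path is just an example):

./myexecutable &
echo $! > /tmp/myserver.pid
# ... later ...
kill "$(cat /tmp/myserver.pid)"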
You could use Supervisord to manage your process.
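A minimal Supervisord program section might look like this (names and paths are placeholders; see the Supervisord docs for the full set of options):

[program:myserver]
command=/path/to/myexecutable
autostart=true
autorestart=true
stdout_logfile=/var/log/myserver.log
redirect_stderr=true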
Ubuntu? Use upstart.
Create a file in /etc/init for your job, named your-service-name.conf
start on net-device-up
exec /path/to/file --option
You can use start your-service-name, as well as: stop, restart, status
This will configure your service using systemd. It is not a comprehensive tutorial, but rather a quick jump-start showing how this can be set up.
Content of your app.service file
[Unit]
Description=deploy-webhook service
After=network.target
[Service]
ExecStart=/usr/bin/go run webhook.go
WorkingDirectory=/etc/deploy-webhook
User=app-svc
Group=app-svc
Restart=always
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=deploy-webhook-service
PrivateTmp=true
Environment=APP_PARAM_1=ParamA
Environment=APP_PARAM_2=ParamB
[Install]
WantedBy=multi-user.target
Starting the Service
sudo systemctl start deploy-webhook.service
Service Status
sudo systemctl status deploy-webhook.service
Logs
journalctl -u deploy-webhook -e
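One step not shown above: to have the service start automatically at boot, you would also enable it:

sudo systemctl enable deploy-webhook.service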
After you press Ctrl+Z (suspending the current task), you can run the command bg in the terminal (short for background) to let the most recent task continue running in the background.
When you need to, run fg to bring it back to the foreground.
To get the same result from the start, you can append & to your command to launch it in the background.
To add to Greg's answer:
To run the Go App as a service you need to create a new service unit file.
However, the service file needs to know where Go is installed. The easiest way to look up that location is by running this command:
which go
which gives you an output like this:
/usr/local/go/bin/go
With this piece of information, you can create the systemd service file. Create a file named providus-app.service in the /etc/systemd/system/ directory using the command below:
sudo touch /etc/systemd/system/providus-app.service
Next open the newly created file:
sudo nano /etc/systemd/system/providus-app.service
Paste the following configuration into your service file:
[Unit]
Description=Providus App Service
After=network.target
[Service]
Type=simple
User=deploy
Group=deploy
ExecStart=/usr/local/go/bin/go run main.go
WorkingDirectory=/home/deploy/providus-app
Restart=always
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=providus-app-service
PrivateTmp=true
[Install]
WantedBy=multi-user.target
When you are finished, save and close the file.
Next, reload the systemd daemon so that it knows about our service file:
sudo systemctl daemon-reload
Start the Providus App service by typing:
sudo systemctl restart providus-app
Double-check that it started without errors by typing:
sudo systemctl status providus-app
And then enable the Providus App service file so that Providus App automatically starts at boot, that is, it can start on its own whenever the server restarts:
sudo systemctl enable providus-app
This creates a symlink at /etc/systemd/system/multi-user.target.wants/providus-app.service pointing to the /etc/systemd/system/providus-app.service file that you created.
To check logs:
sudo journalctl -u providus-app
What is the best way to deploy Node.js?
I have a Dreamhost VPS (that's what they call a VM), and I have been able to install Node.js and set up a proxy. This works great as long as I keep the SSH connection that I started node with open.
2016 answer: nearly every Linux distribution comes with systemd, which means forever, monit, PM2, etc. are no longer necessary - your OS already handles these tasks.
Make a myapp.service file (replacing 'myapp' with your app's name, obviously):
[Unit]
Description=My app
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
# Note Debian/Ubuntu uses 'nogroup', RHEL/Fedora uses 'nobody'
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target
Note if you're new to Unix: /var/www/myapp/app.js should have #!/usr/bin/env node on the very first line and have the executable mode turned on (chmod +x app.js).
Copy your service file into the /etc/systemd/system folder.
Tell systemd about the new service with systemctl daemon-reload.
Start it with systemctl start myapp.
Enable it to run on boot with systemctl enable myapp.
See logs with journalctl -u myapp
This is taken from How we deploy node apps on Linux, 2018 edition, which also includes commands to generate an AWS/DigitalOcean/Azure CloudConfig to build Linux/node servers (including the .service file).
Use Forever. It runs Node.js programs in separate processes and restarts them if any of them dies.
Usage:
forever start example.js to start a process.
forever list to see list of all processes started by forever
forever stop example.js to stop the process, or forever stop 0 to stop the process with index 0 (as shown by forever list).
I've written about my deployment method here: Deploying node.js apps
In short:
Use git post-receive hook
Jake for the build tool
Upstart as a service wrapper for node
Monit to monitor and restart applications if they go down
nginx to route requests to different applications on the same server
pm2 does the trick.
Its features include monitoring, hot code reload, a built-in load balancer, automatic startup scripts, and resurrecting/dumping processes.
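Typical usage looks like this (app.js stands in for your entry point):

pm2 start app.js        # launch and supervise the app
pm2 list                # show managed processes
pm2 logs                # tail the logs
pm2 startup             # generate an init/systemd startup script
pm2 save                # persist the process list for resurrection after reboot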
You can use monit, forever, upstart or systemd to start your server.
You can use Varnish or HAProxy instead of Nginx (at the time this was written, Nginx did not support WebSockets; it gained WebSocket proxying in version 1.3.13).
As a quick and dirty solution you can use nohup node your_app.js & to prevent your app from terminating when your shell session ends, but forever, monit and the other proposed solutions are better.
I made an Upstart script currently used for my apps:
description "YOUR APP NAME"
author "Capy - http://ecapy.com"
env LOG_FILE=/var/log/node/miapp.log
env APP_DIR=/var/node/miapp
env APP=app.js
env PID_NAME=miapp.pid
env USER=www-data
env GROUP=www-data
env POST_START_MESSAGE_TO_LOG="miapp HAS BEEN STARTED."
env NODE_BIN=/usr/local/bin/node
env PID_PATH=/var/opt/node/run
env SERVER_ENV="production"
######################################################
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 99 5
pre-start script
mkdir -p $PID_PATH
mkdir -p /var/log/node
end script
script
export NODE_ENV=$SERVER_ENV
exec start-stop-daemon --start --chuid $USER:$GROUP --make-pidfile --pidfile $PID_PATH/$PID_NAME --chdir $APP_DIR --exec $NODE_BIN -- $APP >> $LOG_FILE 2>&1
end script
post-start script
echo $POST_START_MESSAGE_TO_LOG >> $LOG_FILE
end script
Customize everything above the ###### line, create a file at /etc/init/your-service.conf, and paste the script there.
Then you can:
start your-service
stop your-service
restart your-service
status your-service
I've written a pretty comprehensive guide to deploying Node.js, with example files:
Tutorial: How to Deploy Node.js Applications, With Examples
It covers things like http-proxy, SSL and Socket.IO.
Here's a longer article on solving this problem with systemd: http://savanne.be/articles/deploying-node-js-with-systemd/
Some things to keep in mind:
Who will start your process monitoring? Forever is a great tool, but it needs a monitoring tool to keep itself running. That's a bit silly - why not just use your init system?
Can you adequately monitor your processes?
Are you running multiple backends? If so, do you have provisions in place to prevent any of them from bringing down the others in terms of resource usage?
Will the service be needed all the time? If not, consider socket activation (see the article).
All of these things are easily done with systemd.
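As a rough sketch of socket activation, a pair of units like these lets systemd hold the port and start the service only on the first connection (myserver is a placeholder, and the app itself must be able to accept the listening socket systemd passes to it):

myserver.socket:
[Socket]
ListenStream=8080
[Install]
WantedBy=sockets.target

myserver.service:
[Service]
ExecStart=/usr/local/bin/myserver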
If you have root access, you are better off setting up a daemon so that it runs safe and sound in the background. You can read how to do just that for Debian and Ubuntu in the blog post Run Node.js as a Service on Ubuntu.
Forever will do the trick.
@Kevin: You should be able to kill processes fine. I would double-check the documentation. If you can reproduce the error, it would be great to post it as an issue on GitHub.
Try this: http://www.technology-ebay.de/the-teams/mobile-de/blog/deploying-node-applications-with-capistrano-github-nginx-and-upstart.html
A great and detailed guide for deploying Node.js apps with Capistrano, Upstart and Nginx
As Box9 said, Forever is a good choice for production code. But it is also possible to keep a process going even after the SSH connection is closed from the client.
While not necessarily a good idea for production, this is very handy in the middle of long debug sessions, for following the console output of lengthy processes, or whenever it is useful to disconnect your SSH connection but keep the terminal alive on the server to reconnect later (like starting the Node.js application at home and reconnecting to the console later at work to check how things are going).
Assuming that your server is a *nix box, you can use the screen command from the shell to keep the process running even if the client SSH connection is closed. You can download/install screen from the web if it is not already installed (look for a package for your distribution if Linux, or use MacPorts if OS X).
It works as follows:
When you first open the SSH connection, type 'screen' - this will start your screen session.
Start working as normal (i.e. start your Node.js application)
When you are done, close your terminal. Your server process(es) will continue running.
To reconnect to your console, ssh back to the server, login, and enter 'screen -r' to reconnect. Your old console context will pop back ready for you to resume using it.
To exit screen, while connected to the server, type 'exit' on the console prompt - that will drop you onto the regular shell.
You can have multiple screen sessions running concurrently like this if you need to, and you can connect to any of them from any client. Read the documentation online for all the options.
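For example, named sessions make juggling several of them easier (node-app is a placeholder name):

screen -S node-app      # start a named session
# press Ctrl-A then d to detach without closing it
screen -ls              # list running sessions
screen -r node-app      # reattach to that session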
Forever is a good option for keeping apps running (and it's npm installable as a module which is nice).
But for more serious 'deployment' -- things like remote management of deploying, restarting, running commands etc -- I would use capistrano with the node extension.
https://github.com/loopj/capistrano-node-deploy
https://paastor.com is a relatively new service that does the deploy for you, to a VPS or other server. There is a CLI to push code. Paastor has a free tier, at least it did at the time of posting this.
In your case, you may use the upstart daemon. For a complete deployment solution, I would suggest capistrano. Two useful guides are How to setup Node.js env and How to deploy via capistrano + upstart.
Try node-deploy-server. It is a complex toolset for deploying an application onto your private servers. It is written in Node.js and uses npm for installation.