I need to run a (bash) script after each start of my EC2 instance.
The machine is stopped about 90% of the day; after it wakes up, the script should run.
I tried putting it in the user data, but that only runs on the first boot (init).
After this I followed up here: https://aws.amazon.com/de/premiumsupport/knowledge-center/execute-user-data-ec2/
but this didn't work either, because stopping and then starting the machine apparently does not count as a reboot.
I also added a simple output line to rc.local, but again: nothing happens.
Is there a way?
So we are talking about switching the instance from the stopped state back to the running state.
You could use the oneshot service type of systemd.
Write your mystart.sh and mystop.sh scripts, then chmod/chown them appropriately.
Create /etc/systemd/system/mystart.service. Note that we must specify RemainAfterExit=true so that systemd considers the service active after the start action has finished successfully.
[Unit]
Description=mystart
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/mystart.sh
RemainAfterExit=true
ExecStop=/usr/local/bin/mystop.sh
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Reload systemd (systemctl daemon-reload), enable the service, and test it by stopping and starting the instance.
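A minimal install sequence, assuming the scripts live at the paths referenced in the unit file above:
sudo cp mystart.sh mystop.sh /usr/local/bin/
sudo chmod 755 /usr/local/bin/mystart.sh /usr/local/bin/mystop.sh
sudo systemctl daemon-reload
sudo systemctl enable --now mystart.service   # runs mystart.sh now and on every boot
journalctl -u mystart.service                 # the script's output lands in the journal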
Cloud-Init (which runs the User Data script on first boot) can also run scripts:
On every boot
On the next boot only
So, simply install your script in this directory:
/var/lib/cloud/scripts/per-boot/
Each time the instance is booted (started), the script will run. This is a great way to trigger a batch process on the instance.
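For example, with a hypothetical script name (the file needs to be executable):
sudo cp myscript.sh /var/lib/cloud/scripts/per-boot/
sudo chmod +x /var/lib/cloud/scripts/per-boot/myscript.sh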
See also: Auto-Stop EC2 instances when they finish a task - DEV Community
Related
We have an embedded target environment (separate from our host build environment) in which systemd is running but not cron.
We also have a script that, on most systems, I would simply run from a cron entry every five minutes.
Now I know how to create a service under systemd but this script is a one-shot that exits after it's done its work. What I'd like to do is have it run immediately on boot (after the syslog.target, of course) then every five minutes thereafter.
After reading up on systemd timers, I created the following service file in /lib/systemd/system/xyzzy.service:
[Unit]
Description=XYZZY
After=syslog.target
[Service]
Type=simple
ExecStart=/usr/bin/xyzzy.dash
and equivalent /lib/systemd/system/xyzzy.timer:
[Unit]
Description=XYZZY scheduler
[Timer]
OnBootSec=0min
OnUnitActiveSec=5min
[Install]
WantedBy=multi-user.target
Unfortunately, when booting the target, the timer does not appear to start, since the output of systemctl list-timers --all does not include it. Starting the timer unit manually seems to work okay, but this is something that should run automatically without user intervention.
I would have thought the WantedBy would ensure the timer unit was installed and running and would therefore start the service periodically. However, I've noticed that the multi-user.target.wants directory does not actually have a symbolic link for the timer.
How is this done in systemd?
The timer is not active until you actually enable it:
systemctl enable xyzzy.timer
If you want to see how it works before rebooting, you can also start it:
systemctl start xyzzy.timer
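To verify that it is now scheduled (a quick check using the unit name from the question):
systemctl list-timers --all | grep xyzzy
systemctl status xyzzy.timer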
For a separate target environment where you can't easily run arbitrary commands at boot time (but presumably do control the file system content), you can simply create, in your development area, the same symbolic link that the enable command would create.
For example (assuming SYSROOT identifies the root directory of the target file system):
ln -s ${SYSROOT}/lib/systemd/system/xyzzy.timer \
  ${SYSROOT}/lib/systemd/system/multi-user.target.wants/xyzzy.timer
This will effectively put the timer unit into an enabled state for the multi-user.target, so systemd will start it with that target.
Also, normally your custom files would be stored in /etc/systemd/system/. The equivalent lib directory is intended to host systemd files installed by packages or the OS.
If it's important that your job runs precisely every 5 minutes, check the timing in practice: systemd's monotonic timers can drift over time.
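If you need tighter scheduling, the [Timer] section can request better accuracy; a sketch (systemd's default AccuracySec is one minute):
[Timer]
OnBootSec=0min
OnUnitActiveSec=5min
AccuracySec=1s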
I have packaged a Debian file of our software. There is a .sh script that needs to be started to run the program/software. This .sh script actually runs a Django server and a few more services.
To actually start the application, we run a .desktop file from the menu, which is associated with the .sh script mentioned above. This opens a terminal and asks for the password. Once the password is given, it starts the services and the terminal stays active.
To close this service completely, we need to kill the process by finding the PID of the process and killing it from the terminal. But now I want to kill this process when I close the terminal.
How can I do that?
If you are trying to create a service (some program that runs in the background), you should use your system's mechanism for this.
The traditional one would be a script in /etc/init.d/; a more modern approach is to use systemd.
E.g. a file /etc/systemd/system/myservice.service
[Unit]
Description=My Service
[Service]
Type=simple
# you could run the service as a special user
#User=specialuser
WorkingDirectory=/var/lib/myservice/
# execute this before starting the actual script
#ExecStartPre=/usr/lib/myservice/bin/prestart.sh
ExecStart=/usr/bin/myservice
Restart=on-failure
[Install]
WantedBy=multi-user.target
You can then start/stop the service (as root) using:
systemctl start myservice
resp.
systemctl stop myservice
You can have dependency chains of services, so starting myservice will automatically start myhelper1 and myhelper2.
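Such a chain is declared in the dependent unit with Requires= and After=; a minimal sketch using hypothetical helper units:
[Unit]
Description=My Service
Requires=myhelper1.service myhelper2.service
After=myhelper1.service myhelper2.service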
Check out the manpage systemd.unit(5).
When the controlling terminal is closed, the foreground process group should receive a SIGHUP signal. If your target process is already expected to be in the foreground, then it may be that it is catching or ignoring SIGHUP (the default behavior for a process receiving that signal is to terminate).
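If you control the launcher script, one possible sketch (the paths are placeholders) is a wrapper that explicitly forwards the hang-up to its child:
#!/bin/bash
# hypothetical wrapper: start the services, then kill the child when the terminal closes
/path/to/start-services.sh &
CHILD=$!
trap 'kill "$CHILD" 2>/dev/null' HUP EXIT   # closing the terminal sends SIGHUP to this script
wait "$CHILD"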
I have a systemd service for my Spring Boot application, which is registered with a Consul server and sits behind HAProxy. Consul provides consul-template to automatically update the service location in the HAProxy configuration file via the consul-template command.
consul-template takes a template file, writes the final HAProxy configuration file, and then reloads HAProxy.
Now, the consul-template process needs to run in the background alongside my application, so that as the application comes up, it can detect the new application startup and update its location in the configuration file.
Here is my systemd service file for this.
[Unit]
Description=myservice
Requires=network-online.target
After=network-online.target
[Service]
Type=forking
PIDFile=/home/dragon/myservice/run/myservice.pid
ExecStart=/home/dragon/myservice/bin/myservice-script start
ExecReload=/home/dragon/myservice/bin/myservice-script reload
ExecStop=/home/dragon/myservice/bin/myservice-script stop
ExecStartPost=consul-template -template '/etc/haproxy/haproxy.cfg.template:/etc/haproxy/haproxy.cfg:sudo systemctl reload haproxy'
User=dragon
[Install]
WantedBy=multi-user.target
Now, when I run systemctl start myservice, my application starts and the call to consul-template also works, but the consul-template process doesn't go into the background. I have to press Ctrl+C, and then systemctl returns with both my application and the consul-template process running.
Is there a way to run the consul-template process specified in ExecStartPost in the background?
I tried adding & at the end of the ExecStartPost command, but then consul-template complains about an invalid extra argument and fails.
I also tried wrapping the command as /bin/sh -c "consul-template command here...", but that doesn't work either. Even nohup in this command wasn't working.
Any help is really appreciated.
A workaround would be to use a bash script as your entrypoint: put everything you need in there, and it will all work.
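A rough sketch of such an entrypoint, reusing the paths from the question (an assumption on my part: it would replace the ExecStart/ExecStartPost split and be run with Type=simple):
#!/bin/bash
# hypothetical entrypoint: start the application, then keep consul-template
# in the foreground as the service's main process
/home/dragon/myservice/bin/myservice-script start
exec consul-template -template '/etc/haproxy/haproxy.cfg.template:/etc/haproxy/haproxy.cfg:sudo systemctl reload haproxy'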
I was trying to accomplish the same task. I wanted to fire off some HTTP requests to Tomcat once the service had started, so that I could warmup our servers ahead of the first user request.
I went through a lot of trial and error trying to use ExecStartPost to fire off an async process, but nothing actually worked. By calling a shell script I could trigger background processes, but from my testing systemd appears to clean up the processes when ExecStartPost finishes, so any child processes end up getting killed too. I tried various combinations of &, setsid, nohup, and even some Perl, to launch an executable in its own process, but as soon as the shell script exited from ExecStartPost, any running processes were killed. It's possible there's some solution that would work using ExecStartPost, but I couldn't find it.
However, what did work is creating a new service (like #divinedragon mentions) which piggybacks on the service I wanted to monitor (in this case Tomcat).
Since it took me a little research to get something working the way I wanted, I wanted to share my solution in case it helps someone.
The first step is to create a new service (e.g. /usr/lib/systemd/system/tomcat-service-listener.service):
[Unit]
Description=Tomcat start/stop event listener
# make sure to stop the service when Tomcat stops
BindsTo=tomcat.service
# waits for both Nginx & Tomcat to be started before this service is started
After=nginx.service tomcat.service
[Service]
Type=oneshot
ExecStart=/path/to/your/script.sh start
ExecStop=/path/to/your/script.sh stop
RemainAfterExit=yes
TimeoutStartSec=300
[Install]
# When the service is enabled, forces this service to start when Tomcat is started
WantedBy=tomcat.service
Some notes on what is happening here:
The BindsTo makes sure the service gets stopped when Tomcat is stopped. This triggers the ExecStop command.
The After makes sure that, on a server reboot, this service does not start until both Nginx & Tomcat have started.
The WantedBy will create the wants symlink for Tomcat (when the service is enabled), which forces Tomcat to start this service any time it is restarted.
The RemainAfterExit=yes is necessary for the ExecStop to work. If you only care about triggering something when your service is started and don't care about when it is stopped, you can set this to no and remove the ExecStop line.
Make the TimeoutStartSec long enough for whatever task you plan on running.
To get this service working, you then need to do the following:
# give the unit file the usual permissions
chmod 664 /usr/lib/systemd/system/tomcat-service-listener.service
# make Systemd aware of the new service
systemctl daemon-reload
# register the service so it's started/stopped with Tomcat
systemctl enable tomcat-service-listener.service
Now all you need is a script that triggers the logic you want. In my case, I wanted to warm up some servers once Tomcat started, so my /path/to/your/script.sh looks something like:
#!/bin/sh
SCRIPT_MODE="$1"
LOGFILE=/var/logs/myscript.log
log_message() {
local MESSAGE="$1"
echo "$(date '+%Y-%m-%d %H:%M:%S') $MESSAGE" >> "$LOGFILE"
return 0
}
warmup_server() {
local SERVER_ADDRESS="$1"
local SERVER_DESCRIPTION="$2"
log_message "Warming up $SERVER_DESCRIPTION..."
# we want to track the time it took to warm up the server
local START_TIME=$(date +%s)
# server restarts can take a while for all services to start, so we must retry long enough for all relevant services to start
HTTP_STATUS=$(curl --insecure --location --silent --show-error --fail --retry 60 --retry-delay 2 --retry-max-time 240 --output /dev/null --write-out "%{http_code}" "$SERVER_ADDRESS")
# compute how long the warmup took
local TOTAL_STARTUP_TIME=$(($(date +%s)-$START_TIME))
log_message "$SERVER_DESCRIPTION started in $TOTAL_STARTUP_TIME seconds... (Status: $HTTP_STATUS)"
return 0
}
# monitor when Tomcat has stopped
if [ "$SCRIPT_MODE" == "stop" ]; then
log_message "Tomcat listener shutting down..."
exit 0
elif [ "$SCRIPT_MODE" == "start" ]; then
log_message "Tomcat listener started..."
fi
# servers to warm up
warmup_server 'https://127.0.0.1' 'Localhost #1'
warmup_server 'https://127.0.0.2' 'Localhost #2'
This seems to be working exactly as I want. The service starts up when the server is rebooted, and starting/stopping/restarting Tomcat fires off the expected events. Since it's independent of the Tomcat service, I can restart this warmup script if needed. It also doesn't delay Tomcat's startup time, since it is its own service and therefore runs asynchronously, as I wanted.
I have recently completed the Wiki web development tutorial (http://golang.org/doc/articles/wiki/). I had tons of fun and I would like to experiment more with the net/http package.
However, I noticed that when I run the wiki from a console, the wiki takes over the console. If I close the console terminal or stop the process with CTRL+Z then the server stops.
How can I get the server to run in the background? I think the term for that is running it as a daemon.
I'm running this on Ubuntu 12.04. Thanks for any help.
Simple / Usable things first
If you want a start script without much effort (i.e. not dealing with the process yourself, just having it managed by the system), you could create a systemd service. See Greg's answer for a detailed description of how to do that.
Afterwards you can start the service with
systemctl start myserver
Previously I would have recommended xinetd or something similar for finer granularity regarding resource and permission management, but systemd already covers that.
Using the shell
You could start your process like this:
nohup ./myexecutable &
The & tells the shell to start the command in the background, keeping it in the job list.
On some shells, the job is killed with the HANGUP signal (SIGHUP) when the parent shell exits.
To prevent this, you can launch your command using the nohup command, which discards the HANGUP signal.
However, this does not work if the started process re-registers a handler for the HANGUP signal.
To be really sure, you need to remove the process from the shell's job list.
For two well known shells this can be achieved as follows:
bash:
./myexecutable &
disown <pid>
zsh:
./myexecutable &!
Killing your background job
Normally, the shell prints the PID of the background process, which can then be killed using the kill command to stop the server. If your shell does not print the PID, you can get it using
echo $!
directly after execution. This prints the PID of the forked process.
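For example (the pidfile path is just an illustration):
./myexecutable &
echo $! > /tmp/myexecutable.pid        # remember the PID of the background process
kill "$(cat /tmp/myexecutable.pid)"    # later, stop the server with that PID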
You could use Supervisord to manage your process.
Ubuntu? Use upstart.
Create a file in /etc/init for your job, named your-service-name.conf
start on net-device-up
exec /path/to/file --option
You can use start your-service-name, as well as: stop, restart, status
This configures your service using systemd. It is not a comprehensive tutorial, rather a quick jump-start for how this can be set up.
Content of your deploy-webhook.service file:
[Unit]
Description=deploy-webhook service
After=network.target
[Service]
ExecStart=/usr/bin/go run webhook.go
WorkingDirectory=/etc/deploy-webhook
User=app-svc
Group=app-svc
Restart=always
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=deploy-webhook-service
PrivateTmp=true
Environment=APP_PARAM_1=ParamA
Environment=APP_PARAM_2=ParamB
[Install]
WantedBy=multi-user.target
Starting the Service
sudo systemctl start deploy-webhook.service
Service Status
sudo systemctl status deploy-webhook.service
Logs
journalctl -u deploy-webhook -e
After you press Ctrl+Z (which suspends the current task), you can run the bg command in the terminal (short for background) to let the most recent job continue running in the background.
When you need to, run fg to get back to the task.
To get the same result from the start, you can append & to your command to launch it in the background.
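A quick interactive sketch (assuming a long-running server binary called ./myserver):
./myserver     # runs in the foreground; press Ctrl+Z to suspend it
bg             # resume the suspended job in the background
jobs           # list background jobs and their numbers
fg %1          # bring job 1 back to the foreground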
To add to Greg's answer:
To run the Go App as a service you need to create a new service unit file.
However, the App needs to know where Go is installed. The easiest way to look up that location is by running this command:
which go
which gives you an output like this:
/usr/local/go/bin/go
With this piece of information, you can create the systemd service file. Create a file named providus-app.service in /etc/systemd/system/ using the command below:
sudo touch /etc/systemd/system/providus-app.service
Next open the newly created file:
sudo nano /etc/systemd/system/providus-app.service
Paste the following configuration into your service file:
[Unit]
Description=Providus App Service
After=network.target
[Service]
Type=simple
User=deploy
Group=deploy
ExecStart=/usr/local/go/bin/go run main.go
WorkingDirectory=/home/deploy/providus-app
Restart=always
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=providus-app-service
PrivateTmp=true
[Install]
WantedBy=multi-user.target
When you are finished, save and close the file.
Next, reload the systemd daemon so that it knows about our service file:
sudo systemctl daemon-reload
Start the Providus App service by typing:
sudo systemctl restart providus-app
Double-check that it started without errors by typing:
sudo systemctl status providus-app
And then enable the Providus App service file so that Providus App automatically starts at boot, that is, it can start on its own whenever the server restarts:
sudo systemctl enable providus-app
This creates a symlink at /etc/systemd/system/multi-user.target.wants/providus-app.service pointing to the /etc/systemd/system/providus-app.service file that you created.
To check logs:
sudo journalctl -u providus-app
What is the best way to deploy Node.js?
I have a Dreamhost VPS (that's what they call a VM), and I have been able to install Node.js and set up a proxy. This works great as long as I keep the SSH connection that I started node with open.
2016 answer: nearly every Linux distribution comes with systemd, which means forever, monit, PM2, etc. are no longer necessary - your OS already handles these tasks.
Make a myapp.service file (replacing 'myapp' with your app's name, obviously):
[Unit]
Description=My app
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
# Note Debian/Ubuntu uses 'nogroup', RHEL/Fedora uses 'nobody'
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target
Note if you're new to Unix: /var/www/myapp/app.js should have #!/usr/bin/env node on the very first line and have the executable mode turned on (chmod +x app.js).
Copy your service file into the /etc/systemd/system folder.
Tell systemd about the new service with systemctl daemon-reload.
Start it with systemctl start myapp.
Enable it to run on boot with systemctl enable myapp.
See logs with journalctl -u myapp
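Put together, the whole sequence looks like this (assuming the unit file is named myapp.service):
sudo cp myapp.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start myapp
sudo systemctl enable myapp
journalctl -u myapp -f     # follow the logs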
This is taken from How we deploy node apps on Linux, 2018 edition, which also includes commands to generate an AWS/DigitalOcean/Azure CloudConfig to build Linux/node servers (including the .service file).
Use Forever. It runs Node.js programs in separate processes and restarts them if any dies.
Usage:
forever start example.js to start a process.
forever list to see list of all processes started by forever
forever stop example.js to stop the process, or forever stop 0 to stop the process with index 0 (as shown by forever list).
I've written about my deployment method here: Deploying node.js apps
In short:
Use git post-receive hook
Jake for the build tool
Upstart as a service wrapper for node
Monit to monitor and restart applications if they go down
nginx to route requests to different applications on the same server
pm2 does the trick.
Features are: Monitoring, hot code reload, built-in load balancer, automatic startup script, and resurrect/dump processes.
You can use monit, forever, upstart or systemd to start your server.
You can use Varnish or HAProxy instead of Nginx (at the time of writing, Nginx did not support WebSockets; versions since 1.3.13 do).
As a quick and dirty solution you can use nohup node your_app.js & to prevent your app terminating with your server, but forever, monit and other proposed solutions are better.
I made an Upstart script currently used for my apps:
description "YOUR APP NAME"
author "Capy - http://ecapy.com"
env LOG_FILE=/var/log/node/miapp.log
env APP_DIR=/var/node/miapp
env APP=app.js
env PID_NAME=miapp.pid
env USER=www-data
env GROUP=www-data
env POST_START_MESSAGE_TO_LOG="miapp HAS BEEN STARTED."
env NODE_BIN=/usr/local/bin/node
env PID_PATH=/var/opt/node/run
env SERVER_ENV="production"
######################################################
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 99 5
pre-start script
mkdir -p $PID_PATH
mkdir -p /var/log/node
end script
script
export NODE_ENV=$SERVER_ENV
exec start-stop-daemon --start --chuid $USER:$GROUP --make-pidfile --pidfile $PID_PATH/$PID_NAME --chdir $APP_DIR --exec $NODE_BIN -- $APP >> $LOG_FILE 2>&1
end script
post-start script
echo $POST_START_MESSAGE_TO_LOG >> $LOG_FILE
end script
Customize everything above the ######### line, create a file at /etc/init/your-service.conf, and paste the script there.
Then you can:
start your-service
stop your-service
restart your-service
status your-service
I've written a pretty comprehensive guide to deploying Node.js, with example files:
Tutorial: How to Deploy Node.js Applications, With Examples
It covers things like http-proxy, SSL and Socket.IO.
Here's a longer article on solving this problem with systemd: http://savanne.be/articles/deploying-node-js-with-systemd/
Some things to keep in mind:
Who will start your process monitoring? Forever is a great tool, but it needs a monitoring tool to keep itself running. That's a bit silly, why not just use your init system?
Can you adequately monitor your processes?
Are you running multiple backends? If so, do you have provisions in place to prevent any of them from bringing down the others in terms of resource usage?
Will the service be needed all the time? If not, consider socket activation (see the article).
All of these things are easily done with systemd.
If you have root access you would better set up a daemon so that it runs safe and sound in the background. You can read how to do just that for Debian and Ubuntu in blog post Run Node.js as a Service on Ubuntu.
Forever will do the trick.
#Kevin: You should be able to kill processes fine. I would double check the documentation a bit. If you can reproduce the error it would be great to post it as an issue on GitHub.
Try this: http://www.technology-ebay.de/the-teams/mobile-de/blog/deploying-node-applications-with-capistrano-github-nginx-and-upstart.html
A great and detailed guide for deploying Node.js apps with Capistrano, Upstart and Nginx
As Box9 said, Forever is a good choice for production code. But it is also possible to keep a process going even if the SSH connection is closed from the client.
While not necessarily a good idea for production, this is very handy in the middle of long debug sessions, for following the console output of lengthy processes, or whenever it is useful to disconnect your SSH connection but keep the terminal alive on the server to reconnect later (like starting the Node.js application at home and reconnecting to the console later at work to check how things are going).
Assuming that your server is a *nix box, you can use the screen command from the shell to keep the process running even if the client SSH connection is closed. You can download/install screen if it is not already installed (look for a package for your distribution on Linux, or use MacPorts on OS X).
It works as following:
When you first open the SSH connection, type 'screen' - this will start your screen session.
Start working as normal (i.e. start your Node.js application)
When you are done, close your terminal. Your server process(es) will continue running.
To reconnect to your console, ssh back to the server, login, and enter 'screen -r' to reconnect. Your old console context will pop back ready for you to resume using it.
To exit screen, while connected to the server, type 'exit' on the console prompt - that will drop you onto the regular shell.
You can have multiple screen sessions running concurrently like this if you need to, and you can connect to any of them from any client. Read the documentation online for all the options.
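A slightly more explicit variant uses named sessions (the session name is arbitrary):
screen -S node-app     # start a named session
node app.js            # start the server inside it; detach with Ctrl+A then D
screen -ls             # list running sessions
screen -r node-app     # reattach to the named session later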
Forever is a good option for keeping apps running (and it's npm installable as a module which is nice).
But for more serious 'deployment' -- things like remote management of deploying, restarting, running commands etc -- I would use capistrano with the node extension.
https://github.com/loopj/capistrano-node-deploy
https://paastor.com is a relatively new service that does the deploy for you, to a VPS or other server. There is a CLI to push code. Paastor has a free tier, at least it did at the time of posting this.
In your case you could use the upstart daemon. For a complete deployment solution, I would suggest Capistrano. Two useful guides are How to setup Node.js env and How to deploy via capistrano + upstart.
Try node-deploy-server. It is a complex toolset for deploying an application onto your private servers. It is written in Node.js and uses npm for installation.