Start docker-compose automatically on EC2 startup - linux

I have an Amazon Linux 2 AWS instance with some services orchestrated via docker-compose, and I use the docker-compose up or docker-compose start commands to start them all. I am now setting up my EC2 instance to start and stop automatically every day, but once it starts, I want to run something over SSH that changes to the directory containing the docker-compose.yml file and then starts the services.
something like:
#!/bin/bash
cd /mydirectory
docker-compose start
How can I achieve that?
Thanks

I would recommend using cron for this, as it is easy. Most cron implementations support non-standard schedule macros like @daily, @weekly, @monthly, and @reboot.
You can either put the commands in a shell script and schedule it in the crontab as @reboot /path/to/shell/script
or
you can specify the docker-compose file by its absolute path and schedule it directly in the crontab as @reboot docker-compose -f /path/to/docker-compose.yml start
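As a sketch, the shell script for the first variant might look like this (the /mydirectory path is taken from the question; the docker-compose location is an assumption, so adjust both to your system):
#!/bin/bash
# Run by cron at boot via: @reboot /usr/local/bin/start-compose.sh
cd /mydirectory || exit 1               # directory containing docker-compose.yml
/usr/local/bin/docker-compose up -d     # absolute path, since cron runs with a minimal PATH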
Other possibilities:
Create a systemd service and enable it. All enabled systemd services are started at boot; a unit sketch follows this list. (difficulty: medium)
Put scripts under init.d and link them into the rc*.d directories. These scripts are also started at boot, in priority order. (difficulty: medium)
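For the systemd route, a minimal unit sketch (the unit name, WorkingDirectory, and docker-compose path are illustrative assumptions):
[Unit]
Description=docker-compose app
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/mydirectory
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
[Install]
WantedBy=multi-user.target
Enable it once with sudo systemctl enable myapp.service and it will run at every boot.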
Bonus:
If you specify a restart policy for a container in the docker-compose file, it will autostart when you reboot or switch on the server, as long as the Docker daemon itself starts at boot (see the Compose restart-policy reference).
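A minimal docker-compose.yml sketch with such a policy (service and image names are placeholders):
version: "3"
services:
  web:
    image: myimage:latest   # placeholder image
    restart: always         # container is restarted whenever the daemon starts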

Consider using Amazon Elastic Container Service (Amazon ECS), which can orchestrate docker containers and take care of the underlying OSes.

Simply run the following command once on the host:
sudo systemctl enable docker
Afterwards, the restart: always policy inside your service in docker-compose.yml should start working.

Related

Execute script located inside linked volume on host environment [duplicate]

Can I run a docker command on the host? I installed aws inside my docker container; can I somehow use the aws command on the host (so that under the hood it uses the docker container's aws)?
My situation is this: I have database backups on the production host. I have a Jenkins cron job that takes an SQL file from the db container and puts it into a server folder. Now I also want Jenkins to upload this backup file to AWS storage, but the host has no aws installed, and I don't want to install anything except docker on the host, so I think aws should be installed inside a container.
You can't directly do this. Docker containers and images have isolated filesystems, and the host and containers can't directly access each others' filesystems and binaries.
In theory you could write a shell script that wraps docker run, name it aws, and put it in your $PATH:
#!/bin/sh
# Forward all arguments to the aws binary inside a container
exec docker run --rm -it awscli aws "$@"
but this doesn't scale well, requires you to have root-level permissions on the host, and you won't be able to access files on the host (like ~/.aws/config) or environment variables (like $AWS_ACCESS_KEY_ID) without additional setup.
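That additional setup would amount to mounting the credentials and passing the environment through yourself; a hedged sketch (the awscli image name is kept from above and is an assumption, so you would have to build it or substitute a real image):
#!/bin/sh
# Hypothetical extended wrapper: mount host AWS config, pass credentials through
exec docker run --rm -it \
  -v "$HOME/.aws:/root/.aws" \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  awscli aws "$@"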
You can just install software on your host instead, and it will work normally. There's no requirement to use Docker for absolutely everything.

How to run a docker container as a windows service

I have a windows service that I want to run in a docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same docker container locally as a windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start docker containers without the need to have a user logged in. The docker restart flag only deals with starting containers once docker is already running. To get docker itself to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to make a Windows service that runs docker.
A good explanation of this part of the problem can be found here (no good solution has been found yet without paying for it; the docker team has so far ignored requests to make this work without third-party tools):
How to start Docker daemon (windows service) at startup without the need to log-in?
You can use the --restart=unless-stopped flag with the docker run command, and the docker container will then start automatically even after the server has been shut down and restarted.
Further reading on the restart policy and the flag is here,
but conditions apply: docker itself must run on startup, which it does by default.
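A minimal example of the flag in use (the image and container names are placeholders):
docker run -d --restart=unless-stopped --name myservice myimage:latest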

Azure Docker Container - how to pass startup commands to a docker run?

I have managed to easily deploy a rails app to Azure on the Docker container App Service, but logging it is a pain, since the only way to access the logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so that it essentially accepts any params?
In this case it's simply trying to log to a remote service; if anyone has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in what you would normally pass to docker run after the container name: docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TLDR: you cannot pass any startup parameters to a Linux Web App except for the command to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py

Automatically Start Services in Docker Container

I'm doing some initial tests with docker. At the moment I have my images and I can get some containers running; I list them with:
docker ps
Then I do docker attach container_id and start the apache2 service.
Then from the main console I commit the container to the image.
After exiting the container, if I try to start the container or run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services already started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and if you want to automatically start apache2 when the container is started, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) to a newline at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
Another option would be to use a process manager such as Supervisord to run multiple processes. Someone accomplished this with sshd and mongodb: https://github.com/justone/docker-mongodb
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in it, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
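A minimal Dockerfile sketch of that approach (the base image and package choice are assumptions for illustration):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y apache2
# CMD is the process the container runs; Apache must stay in the foreground
CMD ["apachectl", "-D", "FOREGROUND"]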
You don't need to automate this using a Dockerfile. You can also create the image via a manual commit, as you do, and run it from the command line, supplying the command it should run (which is exactly what the Dockerfile CMD does). You can also override a Dockerfile's CMD this way: only the latest CMD is executed, and that is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you can of course create a convenience script.
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode:
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
Try adding a start script to the ENTRYPOINT in your Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash

How can I automatically start a node.js application in Amazon Linux AMI on aws?

Is there a brief guide explaining how to start an application when the instance boots up? If it were one of the services installed through yum, then I guess I could use /sbin/chkconfig to add it to the services. (Just to make sure: is that correct?)
However, I just want to run a program that was not installed through yum. To run the node.js program, I would have to run sudo node app.js in the home directory whenever the system boots up.
I am not used to the Amazon Linux AMI, so I am having a little trouble finding the 'right' way to run a script automatically on every boot.
Is there an elegant way to do this?
One way is to create an upstart job. That way your app will start once Linux loads, will restart automatically if it crashes, and you can start / stop / restart it by sudo start yourapp / sudo stop yourapp / sudo restart yourapp.
Here are beginning steps:
1) Install the upstart utility (it may be pre-installed if you use a standard Amazon Linux AMI):
sudo yum install upstart
For Ubuntu:
sudo apt-get install upstart
2) Create upstart script for your node app:
In /etc/init, add a file yourappname.conf with the following lines:
#!upstart
description "your app name"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
env NODE_ENV=development
# Warning: this runs node as root user, which is a security risk
# in many scenarios, but upstart-ing a process as a non-root user
# is outside the scope of this question
exec node /path_to_your_app/app.js >> /var/log/yourappname.log 2>&1
3) Start your app with sudo start yourappname
You can use forever-service to provision a node script as a service that starts automatically at boot. The following commands will do the job:
npm install -g forever-service
forever-service install test
This will provision app.js in the current directory as a service via forever. The service will automatically restart every time the system is restarted, and when stopped it will attempt a graceful stop. It provisions the logrotate script as well.
Github url: https://github.com/zapty/forever-service
As of now, forever-service supports Amazon Linux, CentOS, and RedHat; support for other Linux distros, Mac, and Windows is in the works.
NOTE: I am the author of forever-service.
A quick solution would be to start your app from /etc/rc.local; just add your command there.
But if you want to go the elegant way, you'll have to package your application in an rpm file, have a startup script that goes in /etc/rc.d so that you can use chkconfig on your app, and then install the rpm on your instance.
(Or just google for "creating rpm packages".)
My EC2 instance runs Ubuntu, and I used systemd to set this up.
First you need to create a <servicename>.service file. (in my case cloudyleela.service)
sudo nano /lib/systemd/system/cloudyleela.service
Type the following in this file:
[Unit]
Description=cloudy leela
Documentation=http://documentation.domain.com
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/server.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
This starts the node application; you need the full path here. I configured the application to simply restart if something goes wrong. The instances that Amazon provides have no passwords for their users by default.
Reload the unit files from disk, then you can start your service. You need to enable it to make it active as a service that automatically launches at startup.
ubuntu@ip-172-31-21-195:~$ sudo systemctl daemon-reload
ubuntu@ip-172-31-21-195:~$ sudo systemctl start cloudyleela
ubuntu@ip-172-31-21-195:~$ sudo systemctl enable cloudyleela
Created symlink /etc/systemd/system/multi-user.target.wants/cloudyleela.service → /lib/systemd/system/cloudyleela.service.
ubuntu@ip-172-31-21-195:~$
A great systemd for node.js tutorial is available here.
If you run a webserver:
You will probably have some issues running your webserver on port 80. The easiest solution is to run your webserver on a different port (e.g. 4200) and then redirect port 80 to that port. You can accomplish this with the following command:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Unfortunately, this is not persistent, so you have to repeat it whenever your server restarts. A better approach is to include this command in the service script:
ExecStartPre to add the port forwarding
ExecStopPost to remove the port forwarding
PermissionsStartOnly to do this with sudo power
So, something like this:
[Service]
...
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Don't forget to reload and restart your service:
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl daemon-reload
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl stop cloudyleela
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl start cloudyleela
[ec2-user@ip-172-31-39-212 system]$
For microservices (update on Dec 2020)
The previously mentioned solution gives a lot of flexibility, but it does take some time to set up, and for each additional application you need to go through the entire process again. By the time you install your 5th node application, you'll certainly start wondering: "there has to be a shortcut".
The advantage of PM2 is that it's just one service to install; PM2 itself then manages the actual applications.
Even the initial setup of PM2 is easy, because it automatically installs the pm2 service for you.
npm install pm2 -g
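If the boot hook isn't registered on your system, pm2 can generate it for you (it prints a command to run with root privileges):
pm2 startup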
And adding new services is even easier:
pm2 start index.js --name "foo"
When everything's up and running, you can save your setup to have it automatically start on reboot:
pm2 save
If you want an overview of all your running node applications, you can run pm2 list.
PM2 also offers an online (web-based) dashboard to monitor your applications remotely. You may need a license to access some of the dashboard functionality, though (which is a bit overpriced imho).
You can create a script that starts and stops your app and place it in /etc/init.d; make the script adhere to chkconfig's conventions (below), and then use chkconfig to set it to start when other services are started.
You can pick an existing script from /etc/init.d to use as an example; this article describes the requirements, which are basically:
An executable script that identifies the shell needed (e.g., #!/bin/bash)
A comment of the form # chkconfig: <levels> <startprio> <stopprio>, where <levels> is often 345, <startprio> indicates where in the order of services to start, and <stopprio> where in the order to stop. I generally pick a similar service that already exists and use that as a guide for these values (e.g., if you have a web-related service, start at the same levels as httpd, with similar start and stop priorities).
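A hedged sketch of what the top of such a script might look like (the priorities and commands are illustrative, and chkconfig also requires a description line; the app path is reused from the upstart example above):
#!/bin/bash
# chkconfig: 345 85 15
# description: starts and stops the node application
case "$1" in
  start) node /path_to_your_app/app.js >> /var/log/yourappname.log 2>&1 & ;;
  stop)  pkill -f /path_to_your_app/app.js ;;
esac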
Once your script is set up, you can use
chkconfig --add yourscript
chkconfig yourscript on
and you should be good to go. (Some distros may require you to manually symlink the script into the rc*.d directories, but I believe your AWS distro will do that for you when you enable the script.)
Use Elastic Beanstalk :) It provides support for auto-scaling, SSL termination, blue/green deployments, etc.
If you want the salty sysadmin way on a RedHat-based Linux distro (Amazon Linux is a flavor of RedHat), learn systemd, as mentioned by @bvdb in the answer above:
https://en.wikipedia.org/wiki/Systemd
Set everything up as described on an EC2 instance, snapshot a custom AMI, and use this custom AMI as your base for EC2 instances hosting your apps. This way you don't have to go through all that setup multiple times. You'll probably want to get acquainted with load balancers too, if you are running in a production environment with uptime requirements.
Or, yes, as mentioned by @bvdb, you could also use pm2 to interface with systemd. Though I don't think pm2 helps with running your app across multiple EC2 instances, which is definitely recommended for production environments with uptime requirements.
All of which is a very steep learning curve. Since the OP seemed to be new to all this, Elastic Beanstalk, Google App Engine, and others are a great way to get code running in the cloud without all that.
These days I dev in TypeScript, deploying to serverless function execution in the cloud for most things, and don't have to think about package installs or app startup at all.
You can use screen. Run crontab -e and add this line:
@reboot screen -d -m bash -c "cd /home/user/yourapp/; node app"
I have been using forever on AWS, and it does a good job. Install it using
[sudo] npm install forever -g
To add an application use
forever start path_to_application
and to stop the application use
forever stop path_to_application
This is a useful article that helped me with setting it up.
