Running commands upon creation of instances using AWS Elastic Beanstalk - Linux

I've been looking at various methods to run commands upon creation of EC2 instances using Elastic Beanstalk on AWS. AWS Tech Support has suggested different methods to do this, including lifecycle hooks, custom AMIs, and .ebextensions. I've been having issues getting the first two methods (lifecycle hooks and custom AMIs) to work with EB.
I'm currently using .ebextensions to run commands upon deploy, but I'm not sure if there's a way to run a set of commands upon creation only, instead of every time I deploy code. For instance, I have a file .ebextensions/03-commands.config that contains the following code:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
However, I only want this code to run upon instance creation, not every time I deploy, as it currently does. Does anyone know a way to accomplish this?
Thanks in advance!

I would recommend creating an idempotent script in your application that leaves a marker file on the instance in some location, say /var/myapp-state/marker, using something like mkdir -p /var/myapp-state/; touch /var/myapp-state/marker on successful execution. That way your script can check whether the marker file exists and, if it does, make itself a no-op.
Then you can call your script from container_commands: on the first successful execution it will create the marker file, and every subsequent execution will be a no-op.
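A minimal sketch of such a script, assuming the /var/myapp-state/marker path from above and reusing the npm command from the question (the script name is hypothetical):

#!/bin/sh
# one-time-setup.sh - runs its payload only once per instance
MARKER=/var/myapp-state/marker
if [ -f "$MARKER" ]; then
    exit 0                        # marker exists: already ran, no-op
fi
npm install -g -f npm@latest      # the one-time work
mkdir -p "$(dirname "$MARKER")"
touch "$MARKER"                   # record success so later deploys skip this

You would then point a container_commands entry at this script instead of at the npm command directly.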

Create a custom AMI. This way you can set up your instances however you want, and they will launch faster:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html

As I can see from your question, you're using container_commands, which means you are using Elastic Beanstalk with Docker, right? In that case I would recommend reading http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html.
The idea is the following: you can create your own Dockerfile, where you specify all the commands you need to build a Docker container, for example to install all dependencies.
I would recommend using .ebextensions for Elastic Beanstalk customization and configuration, for example to specify the ELB or RDS configuration. In the Dockerfile it makes sense to specify all the commands you need to build a container for your application, which includes setting up the web server, dependencies, etc.
With this approach, Elastic Beanstalk builds the container, and each time you deploy it runs a Docker instance with the deployed source code.
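A minimal sketch of such a Dockerfile, assuming a Node.js app with its entry point at app.js (the base image and file names here are assumptions):

# Dockerfile - build-time steps run when the image is built, not on every deploy
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install          # dependencies are baked into the image
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]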

There is a simple option, leader_only: true, that you can use as per the current AWS Elastic Beanstalk configuration. You simply need to add it under your container command:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
    leader_only: true
Here is the relevant AWS documentation: AWS Elastic Beanstalk container command option

Related

Is there a way to specify custom arguments to the docker run command executing inside Azure Functions?

I'm trying to deploy a custom image to an Azure Function, and I have a requirement to modify the /etc/hosts file inside the container.
I've tried giving the --add-host argument at the docker build stage, but it doesn't help. And as it is an Azure Function, it runs the docker run command by itself, without manual intervention.
So I just wanted to know if there's a possibility of adding the --add-host argument to the docker run command through the Azure Function's configuration.
I'm afraid it's impossible to add the --add-host argument to the docker run command. That's the Functions runtime, and it's deployed by the Azure platform; you cannot change any of its parameters.
If you want to modify the /etc/hosts file, there are two ways to do it as far as I know. One is to change it directly when you create the custom image. The other is to enable the SSH server in the custom image and then change the /etc/hosts file over an SSH connection once the function is running from your custom image.
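One caveat with the first approach: Docker manages /etc/hosts at container start, so a build-time edit may not survive. A minimal sketch of working around that with a wrapper entrypoint, assuming the script is set as the image's ENTRYPOINT so the original start command arrives as its arguments (the hostname and IP are placeholders):

#!/bin/sh
# entrypoint.sh - append the extra hosts entry at container start,
# then hand off to the image's original start command
echo "10.0.0.5 my-internal-host" >> /etc/hosts
exec "$@"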

What is the best way to pull updated changes into Docker containers that are already deployed?

I had to perform these steps to deploy my Node.js/Angular site to AWS via Docker Cloud:
1) Write a Dockerfile
2) Build Docker images based on my Dockerfiles
3) Push those images to Docker Hub
4) Create a node cluster on my Docker Cloud account
5) Write a Docker stack file on Docker Cloud
6) Run the stack on Docker Cloud
7) See the instance running in AWS, and see my site
Now suppose a small change requires a pull from my project repo. But, as you may know, we have already deployed our containers.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker images
Re-push those images to Docker Hub
Re-create our node cluster on Docker Cloud
Re-write our Docker stack file on Docker Cloud
Re-run the stack on Docker Cloud
I was thinking:
SSH into a VM that has Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image: https://docs.docker.com/engine/reference/commandline/service_update/#options
I have no experience with AWS, but I think you can automate the build and update.
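A minimal sketch of that, assuming the containers run as a service in Docker swarm mode (the image and service names are placeholders):

# roll the new image onto the running service without recreating the stack
docker service update --image username/my-app:v2 my-app-service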
If you want to treat a Docker container as a VM, you totally can; however, I would strongly caution against this. Anything in a container is ephemeral: if you make changes to files in it and the container goes down, it will not come back up with those changes.
That said, if you have access to the server, you can exec into the container and execute whatever commands you want. This is usually helpful for dev, but applicable to any container.
The following command will start an interactive bash session inside your desired container; see the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.

Deploying a Node.js app to EC2 and update methods

So I have a Node.js app I would like to deploy to EC2.
I'm planning on creating multiple instances of it and putting them behind Nginx for load balancing.
I know I can use AWS Elastic Beanstalk, but I think it over-provisions stuff I don't need.
My question is about the app update process. I have thought of two options.
The first one is to create a bare git repository on the EC2 instance; every time I push some changes, a post-receive hook will create new instances of the app and update Nginx to switch to the new instances.
Another option is to work with Amazon ECR and containers: every time I update my app image in ECR, it would send an event to the EC2 machine (I'm not sure that is even possible) to create new instances of the app and, again, tell Nginx to switch.
Which one do you think is preferred?
Here is the deployment method we used:
1) Created a bare git repo on the EC2 server, tracking the production branch.
2) In the post-receive hook:
#!/bin/sh
# check out the pushed code into the web root, then install deps and restart the app
git --work-tree=/var/www/domain.com --git-dir=/var/repo/site.git checkout -f
cd /var/www/domain.com && npm install && forever restart app.js
3) In the Nginx configuration:
location / {
    proxy_pass https://localhost:3000;
}
Note:
You can customise the post-receive hook to check whether this is the first deployment and run npm install if so, otherwise run npm update; see the sketch below.
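A minimal sketch of that check, assuming the work tree path from step 2 and treating a missing node_modules directory as the first-deployment signal:

cd /var/www/domain.com
if [ ! -d node_modules ]; then
    npm install    # first deployment
else
    npm update     # subsequent deployments
fi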
I hope this helps solve your issue.

Azure Docker Container - how to pass startup commands to a docker run?

I have managed to easily deploy a Rails app to Azure, on the Docker container App Service, but logging it is a pain, since the only way to access logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so that it essentially accepts any params?
In this case it's trying to simply log to a remote service. If anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in anything that you would normally pass after the container name in docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py

Creating an environment variable within a Docker container when starting up

How would I get the IP address of a Mongo container and set it as an environment variable when creating a Node image?
I've been running into an issue with a conflicting tech stack: Keystone.js, forever, and Docker. My problem is that I need to set up an environment variable for a separate Mongo container, which would seem easy to do by running a shell script when I start up the container that includes:
export MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
The issue comes with starting the Keystone app. Normally I would place it in the same script and call it with docker run, but for this project we need to use forever; the command would be forever keystone.js. There is an issue with this in that the Docker container exits immediately. If I start the app with a simple forever start, rather than going through the script, the app starts up fine, but the needed env variable is not set. It's hard-coded in the Docker image at the moment, but of course this is not a good solution, as the IP of the MongoDB container may change in the future, and then on a restart the Node container would not be able to find the db. I see a couple of possibilities:
Switch to just using node keystone.js. I would lose the functionality of forever start (which restarts the app if there is a critical failure). Tested, and this works, but maybe someone knows a way to make forever work, or a viable alternative?
Find a way to set the above export from the Dockerfile when creating the image. I haven't been able to get this to work, but I do know the name that the MongoDB container is going to use no matter what, if that helps.
Any help is most appreciated.
The best way is to use docker link; this provides you with a hostname plus your environment variables.
docker run ... --link mongodb:mongodb ...
You can also use the -e command-line option of docker run:
docker run -e MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
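A minimal sketch combining both, using the link alias as the hostname (the container, database, and image names are placeholders):

# link the mongo container and point the app at it via the link alias
docker run -d \
  --link mongodb:mongodb \
  -e MONGO_URI="mongodb://mongodb:27017/mydb" \
  my-node-image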
An option for dynamic DNS would be SkyDNS + Skydock.
