I am trying to deploy a Node.js docker-compose app to AWS ECS. Here is how my docker-compose file looks:
version: '3.8'
services:
  sampleapp:
    image: jeetawt/njs-backend
    build:
      context: .
    ports:
      - 3000:3000
    environment:
      - SERVER_PORT=3000
      - CONNECTIONSTRING=mongodb://mongo:27017/isaac
    volumes:
      - ./:/app
    command: npm start
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:
However, when I try to run it using docker compose up after creating an ECS context, it throws the errors below:
WARNING services.build: unsupported attribute
ECS Fargate does not support bind mounts from host: incompatible attribute
I am not specifying anywhere that I would like to use Fargate. Is there any way I can still deploy the application using EC2 instead of Fargate?
The default mode is Fargate. You presumably have not specified an ECS cluster with EC2 instances in your run command.
Your docker-compose file has a bind mount, so your task would need to be deployed to an instance where the mount would work.
This example discusses deploying to an ec2 backed cluster.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html
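Alternatively, if Fargate is acceptable, the two flagged attributes can simply be removed: pre-build and push the image, and drop the host bind mount (the code is baked into the image anyway). A sketch of what that variant of the asker's file might look like (my assumption, untested):

```yaml
version: '3.8'
services:
  sampleapp:
    image: jeetawt/njs-backend   # pre-built and pushed; no build: block
    ports:
      - 3000:3000
    environment:
      - SERVER_PORT=3000
      - CONNECTIONSTRING=mongodb://mongo:27017/isaac
    # no ./:/app bind mount: Fargate only supports named volumes
    command: npm start
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:
```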
Fargate is the default, and there is no way to tell it that you want to deploy on EC2 instead. There are, however, situations where you have to deploy on EC2 because Fargate can't provide the required features (e.g. GPUs).
If you really need to use bind mounts and need an EC2 instance, you may use this trick (I haven't done it, so I am basically brainstorming here):
Configure your task to use a GPU (see examples here)
Convert your compose file using docker compose convert
Manually edit the CFN template to use a different instance type (to avoid deploying a GPU-based instance with its associated price)
Deploy the resulting CFN template.
You may even be able to automate this with some sed circus if you really need to.
As I said, I have not tried it and I am not sure how viable this is, but it shouldn't be too complex, I guess.
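To make the manual template edit concrete: it is just a text substitution on the generated CloudFormation file. A minimal local sketch of that step (the instance types, resource layout, and stack name here are assumptions on my part, not what docker compose convert actually emits):

```shell
# Stand in for a fragment of the generated CloudFormation template
# (the real one would come from: docker compose convert > ecs-template.yml)
cat > ecs-template.yml <<'EOF'
LaunchConfiguration:
  Properties:
    InstanceType: g4dn.xlarge
EOF

# Swap the GPU instance type for a cheaper one
sed -i 's/InstanceType: g4dn.xlarge/InstanceType: t3.medium/' ecs-template.yml

# The edited template would then be deployed with something like:
#   aws cloudformation deploy --template-file ecs-template.yml \
#     --stack-name my-stack --capabilities CAPABILITY_IAM
```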
Within an Azure App Service under an App Service Environment, I have configured a Docker Compose setup using the public Docker Hub as a registry source.
version: '3.7'
services:
  web:
    image: nginx:stable
    restart: always
    ports:
      - '80:80'
Unfortunately this fails to deploy, and checking the logs, I see very little output:
2021-10-21T19:14:55.647Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:02.054Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:11.990Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:28.110Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:17:39.825Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
I'll note that moving to a single container setup (instead of a Docker Compose setup) works fine.
Below is a sample Compose file with a restart policy and a health check:
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
You can also deploy and manage multi-container applications defined in Compose files to ACI using the docker compose command. All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. Name resolution between containers is achieved by writing service names in the /etc/hosts file that is shared automatically by all containers in the container group.
The Docker docs have a good article related to this; please refer to it.
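As an illustration of that name resolution, a second service could reach nginx simply as http://web (a sketch; the curl image choice is my assumption):

```yaml
services:
  web:
    image: nginx
  client:
    image: curlimages/curl
    # "web" resolves via the shared /etc/hosts in the container group
    command: ["curl", "-fsS", "http://web"]
```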
I had a similar problem, but with error: "Container logs could not be loaded: Containerlog resource of type text not found".
The strange thing is that deploying it as a Single Container type (instead of a Docker Compose type) it works fine.
Also, exactly the same docker-compose works perfectly if you use normal App Service (without Environment).
So I suspect it is an Azure bug.
P.S. See also: https://learn.microsoft.com/en-us/answers/questions/955528/-application-error-when-using-docker-compose.html
I have cloned the TheSpaghettiDetective (https://github.com/TheSpaghettiDetective/TheSpaghettiDetective) repository and used its docker-compose.yml to build it, and it works great on my local machine. Now I want to push it to Azure Container Instances in a multi-container group, but I can't get it working.
I tried using this tutorial from Microsoft, but this project is a lot more complex.
The docker-compose file creates 4 containers. I couldn't figure out how to push them to Azure Container Registry with the tutorial, but I was able to do so easily with the Docker extension in VS Code.
Then when I tried to deploy the images, I was able to get the Docker context set up, but the images wouldn't deploy. I think it's because they rely on the files I downloaded from GitHub, so I think I need to set up a file share in Azure?
Where do I go from here? Is there no easy way to clone the repository into azure and use docker-compose up like I'm used to?
The reason the images don't deploy is that the docker-compose.yml file does not set the image option; it only sets the build option to build the images. But once the images are pushed to ACR, you don't need to build them again, you just need to pull them via the image option. See the example in the link you found:
version: '3'
services:
  azure-vote-back:
    image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
    container_name: azure-vote-back
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
You can see that this docker-compose.yml uses the image option to set the image name and the build option to build the image. So try adding the image option to your docker-compose.yml file.
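Applied to the asker's setup, that means giving each built service an image name that points at the registry, e.g. (the registry, path, and tag here are made up for illustration):

```yaml
services:
  web:
    build: ./web
    # after pushing this tag to ACR, ACI pulls it instead of rebuilding
    image: myregistry.azurecr.io/tsd-web:v1
```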
I have a Docker application that works fine in my laptop on Windows using compose and starting multiple instances of a container as a Dask cluster.
The name of the service is "worker" and I start two container instances like so:
docker compose up --scale worker=2
I deployed the image on Azure, and when I run docker compose (using the same command I used on Windows) only one container is started.
How do I deploy a cluster of containers in Azure? Can I use docker compose, or do I need a different approach, such as deploying with templates or Kubernetes?
This is the docker-compose.yml file:
version: "3.0"
services:
  web:
    image: sofacr.azurecr.io/pablo:job2_v1
    volumes:
      - daskvol:/code/defaults_prediction
    ports:
      - "5000:5000"
    environment:
      - SCHEDULER_ADDRESS=scheduler
      - SCHEDULER_PORT=8786
    working_dir: /code
    entrypoint:
      - /opt/conda/bin/waitress-serve
    command:
      - --port=5000
      - defaults_prediction:app
  scheduler:
    image: sofacr.azurecr.io/pablo:job2_v1
    ports:
      - "8787:8787"
    entrypoint:
      - /opt/conda/bin/dask-scheduler
  worker:
    image: sofacr.azurecr.io/pablo:job2_v1
    depends_on:
      - scheduler
    environment:
      - PYTHONPATH=/code
      - SCHEDULER_ADDRESS=scheduler
      - SCHEDULER_PORT=8786
    volumes:
      - daskvol:/code/defaults_prediction
      - daskdatavol:/data
      - daskmodelvol:/model
    entrypoint:
      - /opt/conda/bin/dask-worker
    command:
      - scheduler:8786
volumes:
  daskvol:
    driver: azure_file
    driver_opts:
      share_name: daskvol-0003
      storage_account_name: sofstoraccount
  daskdatavol:
    driver: azure_file
    driver_opts:
      share_name: daskdatavol-0003
      storage_account_name: sofstoraccount
  daskmodelvol:
    driver: azure_file
    driver_opts:
      share_name: daskmodelvol-0003
      storage_account_name: sofstoraccount
What you need here is Azure Kubernetes Service or Azure Web App for Containers. Both will take care of pulling Docker images from ACR and distributing them across a fleet of machines.
Here is a decision tree to help choose your compute service:
Container Instances - a small, fast, serverless container hosting service - usually nice for small container deployments; I tend to use it to spawn ad-hoc background jobs
AKS - large-scale container deployment; the big part here is the multi-container orchestration platform. Have a look at this example
I am also new to Docker, but as far as I have read, to orchestrate containers or to scale up a container you generally use Docker Swarm or Kubernetes. In Azure, the Kubernetes offering is AKS.
docker compose up --scale worker=2
I came across this issue of scaling containers in this link:
https://github.com/docker/compose/issues/3722
How to simply scale a docker-compose service and pass the index and count to each?
Hope this might help.
Now ACI supports deploying from the docker-compose.yaml file, but it doesn't support scaling a container up to multiple replicas, because ACI does not support port mapping and one port can only be exposed once, by one container. So through the docker-compose.yaml file you can only create multiple containers in a container group, with one replica of each.
If you want multiple replicas of one container, then I recommend AKS; it's more suitable for your purpose.
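On AKS, the counterpart of --scale worker=2 would be a Deployment's replica count, roughly like this (the names and image are placeholders, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 2              # the AKS equivalent of --scale worker=2
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: myregistry.azurecr.io/worker:v1
```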
My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - .hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello-world print in it. (Do I also need a Dockerfile here?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What does "docker compose up" have to do with Azure? I don't want to use Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of Azure with az logout but still got this strange error on my MacBook.
I've encountered the same issue as you but in my case I was trying to use an init-mongo.js script for a MongoDB in an ACI. I assume you were working in an Azure context at some point, I can't speak on that logout issue but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience, (regardless if you want to use file share or not) you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this:
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you wanted to use in the file share that is driving the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure but if you do end up using volumes with Azure this is the way to go.
Let me know if this helps!
I was testing deploying Docker Compose on Azure and faced the same problem as you.
Then I tried docker images, and that gave me the clue.
It says: "image command not available in current context, try to use default".
So I found the command docker context use default,
and it worked!
So Azure somehow changed the Docker context, and you need to change it back:
https://docs.docker.com/engine/context/working-with-contexts/
I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example, on one project I'm building a MEAN stack backend.
I was recommended to use Docker; however, I believe it would complicate the deployment process, because shouldn't you have one container for Mongo, one for Express, etc.? As found in this question on Stack Overflow.
How do developers manage multiple environments without VMs?
And in particular, what are best practices doing this on ubuntu?
Thanks a lot.
With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type
docker-compose up
And your stack will be up in seconds.