Deploying a cluster of containers in Azure

I have a Docker application that works fine on my Windows laptop using Compose, starting multiple instances of a container as a Dask cluster.
The name of the service is "worker" and I start two container instances like so:
docker compose up --scale worker=2
I deployed the image to Azure, and when I run docker compose (using the same command I used on Windows), only one container is started.
How do I deploy a cluster of containers in Azure? Can I use docker compose, or do I need a different approach, such as deploying with templates or Kubernetes?
This is the docker-compose.yml file:
version: "3.0"
services:
  web:
    image: sofacr.azurecr.io/pablo:job2_v1
    volumes:
      - daskvol:/code/defaults_prediction
    ports:
      - "5000:5000"
    environment:
      - SCHEDULER_ADDRESS=scheduler
      - SCHEDULER_PORT=8786
    working_dir: /code
    entrypoint:
      - /opt/conda/bin/waitress-serve
    command:
      - --port=5000
      - defaults_prediction:app
  scheduler:
    image: sofacr.azurecr.io/pablo:job2_v1
    ports:
      - "8787:8787"
    entrypoint:
      - /opt/conda/bin/dask-scheduler
  worker:
    image: sofacr.azurecr.io/pablo:job2_v1
    depends_on:
      - scheduler
    environment:
      - PYTHONPATH=/code
      - SCHEDULER_ADDRESS=scheduler
      - SCHEDULER_PORT=8786
    volumes:
      - daskvol:/code/defaults_prediction
      - daskdatavol:/data
      - daskmodelvol:/model
    entrypoint:
      - /opt/conda/bin/dask-worker
    command:
      - scheduler:8786
volumes:
  daskvol:
    driver: azure_file
    driver_opts:
      share_name: daskvol-0003
      storage_account_name: sofstoraccount
  daskdatavol:
    driver: azure_file
    driver_opts:
      share_name: daskdatavol-0003
      storage_account_name: sofstoraccount
  daskmodelvol:
    driver: azure_file
    driver_opts:
      share_name: daskmodelvol-0003
      storage_account_name: sofstoraccount

What you need here is Azure Kubernetes Service (AKS) or Azure Web Apps for Containers. Both take care of pulling Docker images from ACR and distributing them across a fleet of machines.
Here is a decision tree to help choose your compute service:
Container Instances - a small, fast, serverless container-hosting service - usually nice for small container deployments; I tend to use it to spawn ad-hoc background jobs
AKS - large-scale container deployment; the big part here is the multi-container orchestration platform. Have a look at this example
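To connect this back to the question: the Compose-level `--scale worker=2` maps to a Deployment's `replicas` field in AKS. A minimal sketch only, assuming the image from the question and a ClusterIP Service named `scheduler` already exist in the cluster:

```yaml
# Hypothetical AKS equivalent of `docker compose up --scale worker=2`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dask-worker
spec:
  replicas: 2                 # the Compose --scale count lives here
  selector:
    matchLabels:
      app: dask-worker
  template:
    metadata:
      labels:
        app: dask-worker
    spec:
      containers:
        - name: worker
          image: sofacr.azurecr.io/pablo:job2_v1   # image from the question
          command: ["/opt/conda/bin/dask-worker", "scheduler:8786"]
```

Scaling later is then `kubectl scale deployment dask-worker --replicas=4`.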

I am also new to Docker, but from what I have read, to orchestrate or scale up containers you generally use Docker Swarm or Kubernetes. In Azure, the Kubernetes offering is AKS.
docker compose up --scale worker=2
I have come across this issue of scaling containers in this link:
https://github.com/docker/compose/issues/3722
How to simply scale a docker-compose service and pass the index and count to each?
Hope this might help.

ACI now supports deploying from a docker-compose.yaml file, but it does not support scaling a container up to multiple replicas. This is because ACI does not support port mapping, and a given port can be exposed only once, by one container. So through a docker-compose.yaml file you can only create multiple containers in a container group, each with a single replica.
If you want multiple replicas of one container, then I recommend AKS; it is more suitable for your purpose.

Related

Connect to kafka running in Azure Container Instance from outside

I have a Kafka instance running in an Azure Container Instance. I want to connect to it (send messages) from outside the container (from an application running on an external server/local computer, or from another container).
After searching the internet, I understand that we need to provide the external IP address to a Kafka listener so that outside clients can connect.
Eg: KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://<ip-address>:9092
But since an Azure container instance only gets its IP address after it has spun up, how can we connect in this case?
docker-compose.yaml
version: '3.9'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_JMX_PORT: 39999
    volumes:
      - ../zookeeper_data:/var/lib/zookeeper/data
      - ../zookeeper_log:/var/lib/zookeeper/log
    networks:
      - app_net
  #*************kafka***************
  kafkaserver:
    image: confluentinc/cp-kafka:7.0.1
    container_name: kafkaserver
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 49999
    volumes:
      - ../kafka_data:/var/lib/kafka/data
    networks:
      - app_net
networks:
  app_net:
    driver: bridge
You could create an Event Hubs cluster with Kafka support instead...
But if you want to run Kafka in Docker, the Confluent image would need to be extended with your own Dockerfile that injects your own shell script between these lines, using some shell command to fetch the external listener address defined at runtime.
e.g. create an aci-run file with this section:
echo "===> Configuring for ACI networking ..."
/etc/confluent/docker/aci-override
echo "===> Configuring ..."
/etc/confluent/docker/configure
echo "===> Running preflight checks ... "
/etc/confluent/docker/ensure
(Might need source /etc/confluent/docker/aci-override ... I haven't tested this)
Create a Dockerfile like so and build/push to your registry
ARG CONFLUENT_VERSION=7.0.1
FROM confluentinc/cp-kafka:${CONFLUENT_VERSION}
COPY aci-override /etc/confluent/docker/aci-override
# override the stock run script
COPY aci-run /etc/confluent/docker/run
In aci-override
#!/bin/bash
ACI_IP=...
ACI_EXTERNAL_PORT=...
ACI_SERVICE_NAME=...
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${ACI_IP}:${ACI_EXTERNAL_PORT}
You can remove the localhost listener, since you want to connect externally.
Then update the YAML to run that image.
I know Heroku, Apache Mesos, Kubernetes, etc all set some PORT environment variable within the container when it starts. I'm not sure what that is for ACI, but if you can exec into a simple running container and run env, you might see it.
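Fleshing out the aci-override idea above: this is a sketch only. ACI does not document injecting its public IP as an environment variable, so the discovery step is an assumption (shown as a placeholder with a suggested lookup in the comment):

```shell
#!/bin/bash
# Hypothetical aci-override sketch. ACI_IP discovery is an assumption:
# ACI doesn't inject its public IP, so a runtime lookup (e.g. an external
# echo service) would be needed where the placeholder below sits.
ACI_IP="${ACI_IP:-203.0.113.7}"        # placeholder; e.g. ACI_IP=$(curl -s https://api.ipify.org)
ACI_EXTERNAL_PORT="${ACI_EXTERNAL_PORT:-9092}"
export KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://${ACI_IP}:${ACI_EXTERNAL_PORT}"
echo "Advertising ${KAFKA_ADVERTISED_LISTENERS}"
```

Whatever discovery mechanism you land on, the only contract that matters is that KAFKA_ADVERTISED_LISTENERS is exported before the stock configure script runs.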

Azure WebApp and docker-compose in Linux

I have a WebApp that runs in a Linux Service Plan via docker-compose. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
I've noticed that my app sometimes freezes for a while (usually less than 1 minute), and when I check Diagnose (Linux - Number of Running Containers per Host) I can see this:
How could it be possible to have 20+ containers running?
Thanks.
I've created a new service plan (P2v2) for my app (and nothing else), and my app (which has just two containers - .NET 3.1 and nginx) shows 4 containers... but this is not a problem for me... at all...
The problem I found in Application Insights was a method that retrieves a blob to serve an image... blobs are really fast for uploads and downloads, but they are terrible for search... my method was checking whether the blob exists before sending it to the API, and this (async) process was blocking my API responses... I just removed the check and my app is running as desired (all under 1 sec - almost all under 250 ms response).
Thanks for your help.

Docker Compose Failure in Azure App Service Environment (ASEv3)

Within an Azure App Service under an App Service Environment, I have configured a Docker Compose setup using the public Docker Hub as a registry source.
version: '3.7'
services:
web:
image: nginx:stable
restart: always
ports:
- '80:80'
Unfortunately this fails to deploy, and checking the logs, I see very little output:
2021-10-21T19:14:55.647Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:02.054Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:11.990Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:28.110Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:17:39.825Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
I'll note that moving to a single container setup (instead of a Docker Compose setup) works fine.
Below is a Compose file to try:
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
You can also deploy and manage multi-container applications defined in Compose files to ACI using the docker compose command. All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. Name resolution between containers is achieved by writing service names in the /etc/hosts file that is shared automatically by all containers in the container group.
There is a good blog related to this; please refer to it (thanks to the Docker docs).
I had a similar problem, but with error: "Container logs could not be loaded: Containerlog resource of type text not found".
The strange thing is that deploying it as a Single Container type (instead of a Docker Compose type) it works fine.
Also, exactly the same docker-compose works perfectly if you use normal App Service (without Environment).
So I suspect it is an Azure bug.
P.S. https://learn.microsoft.com/en-us/answers/questions/955528/-application-error-when-using-docker-compose.html

ECS Fargate does not support bind mounts

I am trying to deploy a Node.js docker-compose app into AWS ECS; here is how my docker-compose file looks -
version: '3.8'
services:
  sampleapp:
    image: jeetawt/njs-backend
    build:
      context: .
    ports:
      - 3000:3000
    environment:
      - SERVER_PORT=3000
      - CONNECTIONSTRING=mongodb://mongo:27017/isaac
    volumes:
      - ./:/app
    command: npm start
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:
However, when I try to run it using docker compose up after creating an ECS context, it throws the below error -
WARNING services.build: unsupported attribute
ECS Fargate does not support bind mounts from host: incompatible attribute
I am not specifying anywhere that I would like to use Fargate for this. Is there any way I can still deploy the application using EC2 instead of Fargate?
The default mode is Fargate. You presumably have not specified an ECS cluster with EC2 instances in your run command.
Your docker compose file has a bind mount, so your task would need to be deployed to an instance where the mount would work.
This example discusses deploying to an EC2-backed cluster:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html
Fargate is the default, and there is no way to tell it that you want to deploy on EC2 instead. There are, however, situations where we have to deploy on EC2 because Fargate can't provide the required features (e.g. GPUs).
If you really, really need bind mounts and therefore an EC2 instance, you may use this trick (I haven't done it, so I am basically brainstorming here):
configure your task to use a GPU (see examples here)
convert your compose file using docker compose convert
manually edit the CFN template to use a different instance type (to avoid deploying a GPU-based instance with its associated price)
deploy the resulting CFN template
You may even be able to automate this with some sed circus if you really need to.
As I said, I have not tried it and I am not sure how viable this could be. But it wouldn't be too complex, I guess.
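The "sed circus" could look roughly like this. A sketch only: the instance-type names are illustrative, and the commented-out cloud commands assume an ECS docker context and AWS credentials exist:

```shell
# The convert step would produce the template; stub a fragment here so the
# edit step below is runnable:
#   docker --context myecs compose convert > template.yml
cat > template.yml <<'EOF'
LaunchTemplateData:
  InstanceType: g4dn.xlarge
EOF

# Swap the GPU instance type for a cheaper EC2 type (names illustrative)
sed -i.bak 's/g4dn\.xlarge/t3.medium/' template.yml

# Then deploy the edited template:
#   aws cloudformation deploy --template-file template.yml --stack-name myapp
grep 'InstanceType' template.yml
```

The same pattern works for any other attribute the convert step emits that you need to override before deployment.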

Deploying via Docker Compose to Azure App Service with multiple containers from different sources

I have a docker-compose.yml file which is created from a build step in Azure Devops. The build step works well and I can see how the docker-compose.yml file is produced. That makes sense to me.
However, it is looking for a normal Docker Hub image to run one of the services, while the other service is one I've created and am hosting in my Azure Container Registry.
The docker compose file looks like this:
networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    image: mkodockx/docker-clamav@sha256:b90929eebf08b6c3c0e2104f3f6d558549612611f0be82c2c9b107f01c62a759
    networks:
      my-network: {}
    ports:
      - published: 3310
        target: 3310
  super-duper-service:
    build:
      context: .
      dockerfile: super-duper-service/Dockerfile
    image: xxxxxx.azurecr.io/superduperservice@sha256:ec3dd010ea02025c23b336dc7abeee17725a3b219e303d73768c2145de710432
    networks:
      my-network: {}
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
version: '3.4'
When I put this into an Azure App Service using the Docker Compose tab, I have to select an image source - either Azure Container Registry or Docker Hub - I'm guessing the former, because I am connected to that.
When I start the service, my logs say:
2020-12-04T14:11:38.175Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.531Z INFO - Starting multi-container app..
2020-12-04T14:23:28.531Z ERROR - Exception in multi-container config parsing: Exception: System.NullReferenceException, Msg: Object reference not set to an instance of an object.
2020-12-04T14:23:28.532Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.534Z INFO - Stopping site ingeniuus-antivirus because it failed during startup.
It's not very helpful, and I don't think there's anything wrong with that docker-compose.yml file.
If I try to deploy ONLY the service from the Azure Container Registry, it deploys, but the other service doesn't.
Does anyone know why the service doesn't start?
Well, there are two problems I see in your docker-compose file for the Azure Web App.
One problem is that Azure Web App only supports configuring one image registry in the docker-compose file. That means you can only configure Docker Hub or ACR, not both.
Another problem is that Azure Web App does not support the build option in the docker-compose file. See the details here.
Given all of the above, I suggest you create all your custom images, push them to the ACR, and use the ACR only.
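One way to consolidate onto a single registry is to import the Docker Hub image into ACR. A sketch, assuming an authenticated Azure CLI; the registry name mirrors the redacted one in the question, and the tag choice is illustrative:

```shell
# Import the public clamav image into ACR so the compose file references one registry.
REGISTRY="xxxxxx"                                 # ACR name, as redacted in the question
SOURCE="docker.io/mkodockx/docker-clamav:latest"
TARGET="docker-clamav:latest"
NEW_IMAGE="${REGISTRY}.azurecr.io/${TARGET}"      # what the compose file would then reference
if command -v az >/dev/null 2>&1; then
  az acr import --name "$REGISTRY" --source "$SOURCE" --image "$TARGET" \
    || echo "import failed; check ACR login and permissions"
else
  echo "az CLI not available; would import ${SOURCE} as ${NEW_IMAGE}"
fi
```

After the import, update the clamav service's image line to the ACR reference so both services pull from the same registry.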
