I would like to link two Docker containers deployed on Azure (ACS).
I have one container running an API server written in Node.js and another container running Mongo.
I'd like to use something like "--link mymongodb", as I do on my PC, but there is no such parameter in az container create.
To create the containers I use this syntax:
az container create --name my-app --image myprivateregistry/my-app --resource-group MyResourceGroup --ports 80 --ip-address public
Do I perhaps need to create a Virtual Network?
Could you point me in the right direction, please?
I think you are looking for something like Docker Compose on Azure. If you want to use Azure Container Instances, you should take a look at Deploy a multi-container container group with YAML or with an Azure template. It lets you create multiple containers in a container group, and those containers can connect to each other.
In addition, you can try Azure Kubernetes Service; maybe it can also help you. If you need more help, please leave me a message.
You will need to specify both images (your app and Mongo) in an Azure YAML file. It looks like a Docker Compose YAML file, but it isn't.
Assuming your Node.js app runs on port 3000, this could be a YAML configuration for Azure Container Instances:
apiVersion: 2018-06-01
location: westeurope
name: my-app-with-mongo
properties:
  containers:
  - name: mongodb
    properties:
      image: mongo
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 27017
  - name: my-app
    properties:
      image: myprivateregistry/my-app
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 3000
  osType: Linux
  ipAddress:
    type: Public
    dnsNameLabel: my-app
    ports:
    - protocol: tcp
      port: '3000'
  imageRegistryCredentials:
  - server: myprivateregistry.azurecr.io
    username: username-for-myprivateregistry
    password: password-for-myprivateregistry
tags: null
type: Microsoft.ContainerInstance/containerGroups
Simply start it with
az container create --resource-group MyResourceGroup --file azure-container-group.yml
You can then access your mongo database on localhost:27017 as all containers run on the same host:
Azure Container Instances supports the deployment of multiple
containers onto a single host by using a container group.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-multi-container-yaml
Also mind the order of the containers you specify in the YAML file. You will want to start Mongo first, then your Node.js app, as it probably wants to connect to Mongo on start.
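If your Node.js app reads the Mongo connection string from an environment variable, you can also pass it directly in the same YAML. A minimal sketch, assuming a hypothetical variable name MONGO_URL and database name mydb (adjust both to whatever your app actually expects), added under the my-app container's properties next to resources and ports:

      environmentVariables:
      - name: MONGO_URL                        # hypothetical variable name
        value: mongodb://localhost:27017/mydb  # containers in one group share localhost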
Related
I have a Docker application that works fine on my laptop on Windows, using Compose and starting multiple instances of a container as a Dask cluster.
The name of the service is "worker" and I start two container instances like so:
docker compose up --scale worker=2
I deployed the image on Azure, and when I run docker compose (using the same command I used on Windows) only one container is started.
How do I deploy a cluster of containers in Azure? Can I use Docker Compose, or do I need a different approach, such as deploying with templates or Kubernetes?
This is the docker-compose.yml file:
version: "3.0"
services:
web:
image: sofacr.azurecr.io/pablo:job2_v1
volumes:
- daskvol:/code/defaults_prediction
ports:
- "5000:5000"
environment:
- SCHEDULER_ADDRESS=scheduler
- SCHEDULER_PORT=8786
working_dir: /code
entrypoint:
- /opt/conda/bin/waitress-serve
command:
- --port=5000
- defaults_prediction:app
scheduler:
image: sofacr.azurecr.io/pablo:job2_v1
ports:
- "8787:8787"
entrypoint:
- /opt/conda/bin/dask-scheduler
worker:
image: sofacr.azurecr.io/pablo:job2_v1
depends_on:
- scheduler
environment:
- PYTHONPATH=/code
- SCHEDULER_ADDRESS=scheduler
- SCHEDULER_PORT=8786
volumes:
- daskvol:/code/defaults_prediction
- daskdatavol:/data
- daskmodelvol:/model
entrypoint:
- /opt/conda/bin/dask-worker
command:
- scheduler:8786
volumes:
daskvol:
driver: azure_file
driver_opts:
share_name: daskvol-0003
storage_account_name: sofstoraccount
daskdatavol:
driver: azure_file
driver_opts:
share_name: daskdatavol-0003
storage_account_name: sofstoraccount
daskmodelvol:
driver: azure_file
driver_opts:
share_name: daskmodelvol-0003
storage_account_name: sofstoraccount
What you need here is Azure Kubernetes Service or Azure Web App for Containers. Both will take care of pulling Docker images from ACR and distributing them across a fleet of machines.
Here is a decision tree to choose your compute service.
Container Instances - a small, fast, serverless container hosting service - usually nice for small container deployments; I tend to use it to spawn ad hoc background jobs.
AKS - large-scale container deployment; the big part here is the multi-container orchestration platform. Have a look at this example.
I am also new to Docker, but as far as I have read, to orchestrate containers or to scale them up you generally use Docker Swarm or Kubernetes. On Azure, the managed Kubernetes offering is AKS.
docker compose up --scale worker=2
I have come across this issue of scaling containers in this link:
https://github.com/docker/compose/issues/3722
How to simply scale a docker-compose service and pass the index and count to each?
Hope this helps.
ACI now supports deploying from a docker-compose.yaml file, but it does not support scaling a container up to multiple replicas, because ACI does not support port mapping and a given port can only be exposed once, by one container. So through the docker-compose.yaml file you can only create multiple containers in a container group, each with a single replica.
If you want multiple replicas of one container, then I recommend you use AKS; it is more suitable for your purpose.
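To illustrate, here is a minimal sketch of what the worker could look like on AKS as a Kubernetes Deployment. The dask-worker name and the assumption that a Service called "scheduler" exists are both hypothetical, not something your compose file gives you automatically; scaling is then just the replicas field, or kubectl scale deployment dask-worker --replicas=2.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dask-worker            # hypothetical name
spec:
  replicas: 2                  # the equivalent of "--scale worker=2"
  selector:
    matchLabels:
      app: dask-worker
  template:
    metadata:
      labels:
        app: dask-worker
    spec:
      containers:
      - name: worker
        image: sofacr.azurecr.io/pablo:job2_v1
        command: ["/opt/conda/bin/dask-worker", "scheduler:8786"]   # assumes a Service named "scheduler"
        env:
        - name: PYTHONPATH
          value: /code
        - name: SCHEDULER_ADDRESS
          value: scheduler
        - name: SCHEDULER_PORT
          value: "8786"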
From Azure we are trying to create a container using Azure Container Instances with a prepared YAML file. From the machine where we execute the az container create command we can log in successfully to our private registry (e.g. fa-docker-snapshot-local.docker.comp.dev on JFrog Artifactory) after entering the password, and we can docker pull from it as well:
docker login fa-docker-snapshot-local.docker.comp.dev -u svc-faselect
Login succeeded
So we can pull the image successfully, and the image path is the same as when doing a manual docker pull:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
We have a YAML file for the deployment and are trying to create the container using the az command from the SAME server. In the YAML file we have set up the same registry information (server, username and password) and the same image:
az container create --resource-group FRONT-SELECT-NA2 --file ads-azure.yaml
When we execute this command, it runs for 30 minutes and after that this message is displayed: "Deployment failed. Operation failed with status 200: Resource State Failed"
Full YAML:
Full Yaml:
apiVersion: '2019-12-01'
location: eastus2
name: ads-test-group
properties:
containers:
- name: front-arena-ads-test
properties:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
environmentVariables:
- name: 'DBTYPE'
value: 'odbc'
command:
- /opt/front/arena/sbin/ads_start
- ads_start
- '-unicode'
- '-db_server test01'
- '-db_name HEDGE2_ADM_Test1'
- '-db_user sqldbadmin'
- '-db_password pass'
- '-db_client_user HEDGE2_ADM_Test1'
- '-db_client_password Password55'
ports:
- port: 9000
protocol: TCP
resources:
requests:
cpu: 1.0
memoryInGB: 4
volumeMounts:
- mountPath: /opt/front/arena/host
name: ads-filesharevolume
imageRegistryCredentials: # Credentials to pull a private image
- server: fa-docker-snapshot-local.docker.comp.dev
username: svcacct-faselect
password: test
ipAddress:
type: Private
ports:
- protocol: tcp
port: '9000'
volumes:
- name: ads-filesharevolume
azureFile:
sharename: azurecontainershare
storageAccountName: frontarenastorage
storageAccountKey: kdUDK97MEB308N=
networkProfile:
id: /subscriptions/746feu-1537-1007-b705-0f895fc0f7ea/resourceGroups/SELECT-NA2/providers/Microsoft.Network/networkProfiles/fa-aci-test-networkProfile
osType: Linux
restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Can you please help us understand why this error occurs?
Thank you
As far as I can tell there is nothing wrong with your YAML file, so I can only give you some possible causes to check.
Make sure the configuration is all right: the server URL, username and password, and also the image name and tag;
Change the port from '9000' to 9000, that is, remove the quotes;
Take a look at the Note; maybe the mounted volume makes the container crash. In that case you need to mount the file share to a new folder, meaning a folder that does not already exist in the image.
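It may also help to inspect the container group's state and logs right after the failed deployment, for example (resource group and names taken from your YAML):

az container show --resource-group FRONT-SELECT-NA2 --name ads-test-group
az container logs --resource-group FRONT-SELECT-NA2 --name ads-test-group --container-name front-arena-ads-test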
I have a docker-compose.yml file which is created by a build step in Azure DevOps. The build step works well and I can see how the docker-compose.yml file is produced; that makes sense to me.
However, one of the services uses a normal Docker Hub image, while the other service is one I've created and am hosting in my Azure Container Registry.
The Docker Compose file looks like this:
networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    image: mkodockx/docker-clamav#sha256:b90929eebf08b6c3c0e2104f3f6d558549612611f0be82c2c9b107f01c62a759
    networks:
      my-network: {}
    ports:
    - published: 3310
      target: 3310
  super-duper-service:
    build:
      context: .
      dockerfile: super-duper-service/Dockerfile
    image: xxxxxx.azurecr.io/superduperservice#sha256:ec3dd010ea02025c23b336dc7abeee17725a3b219e303d73768c2145de710432
    networks:
      my-network: {}
    ports:
    - published: 80
      target: 80
    - published: 443
      target: 443
version: '3.4'
When I put this into an Azure App Service using the Docker Compose tab, I have to choose an image source, either Azure Container Registry or Docker Hub; I'm guessing the former, because that is the one I am connected to.
When I start the service, my logs say:
2020-12-04T14:11:38.175Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.531Z INFO - Starting multi-container app..
2020-12-04T14:23:28.531Z ERROR - Exception in multi-container config parsing: Exception: System.NullReferenceException, Msg: Object reference not set to an instance of an object.
2020-12-04T14:23:28.532Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.534Z INFO - Stopping site ingeniuus-antivirus because it failed during startup.
It's not very helpful, and I don't think there's anything wrong with that docker-compose.yml file.
If I try to deploy ONLY the service from the Azure Container Registry, it deploys, but the other service does not.
Does anyone know why the service doesn't start?
Well, there are two problems I see in your docker-compose file for the Azure Web App.
One problem is that Azure Web App only supports configuring one image registry in the docker-compose file. That means you can only configure Docker Hub or ACR, not both.
Another problem is that Azure Web App does not support the build option in the docker-compose file. See the details here.
Given the above, I suggest you build all your custom images, push them to the ACR, and use the ACR only.
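For illustration, here is a sketch of what the adjusted compose file could look like once both images live in your ACR. The xxxxxx.azurecr.io/clamav name is a hypothetical retagged copy of the Docker Hub image, and the build section is removed:

networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    image: xxxxxx.azurecr.io/clamav:latest             # hypothetical ACR copy of mkodockx/docker-clamav
    networks:
      my-network: {}
    ports:
    - published: 3310
      target: 3310
  super-duper-service:
    image: xxxxxx.azurecr.io/superduperservice:latest  # no "build" section, Web App does not support it
    networks:
      my-network: {}
    ports:
    - published: 80
      target: 80
    - published: 443
      target: 443
version: '3.4'

You can copy the Docker Hub image into your ACR with docker pull/tag/push, or with az acr import.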
I have a tunnel created between my Azure subscription and my on-prem servers. On prem we have an Artifactory server that houses all of our Docker images. For all internal servers we have a company-wide CA trust, and all certs are generated from it.
However, when I try to deploy something to AKS and reference this Docker registry, I get a cert error because the nodes themselves do not trust the "in house" self-signed cert.
Is there any way to get the root CA chain added to the nodes? Or a way to tell the Docker daemon on the AKS nodes that this is an insecure registry?
Not one hundred percent sure, but you can try using your Docker config to create a secret for the image pull, with a command like this:
cat ~/.docker/config.json | base64
Then create the secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
Use this secret in your deployment or pod as the value of imagePullSecrets. For more details, see Using a private Docker Registry with Kubernetes.
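For example, a minimal pod spec that references the secret (the image name is just a placeholder for an image in your private registry):

apiVersion: v1
kind: Pod
metadata:
  name: private-registry-test
spec:
  containers:
  - name: app
    image: my-onprem-registry.example.com/myapp:latest   # placeholder image reference
  imagePullSecrets:
  - name: registrypullsecret                             # the secret created above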
To begin with, I would recommend using curl to check the connection between your Azure cluster and the on-prem server.
Please try both curl and curl -k and check whether they work (-k allows connections to SSL sites without valid certs; I assume plain curl won't work, which means you don't have the on-prem certs on the Azure cluster).
If plain curl fails while curl -k works, you need to copy the certs from on prem and add them to the Azure cluster.
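For example, against the standard Docker registry API endpoint (the hostname is a placeholder for your Artifactory server, and the exact path may differ depending on how the Docker repository is exposed):

curl https://my-onprem-registry.example.com/v2/
curl -k https://my-onprem-registry.example.com/v2/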
Links that should help you do that:
https://docs.docker.com/ee/enable-client-certificate-authentication/
https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate
I also found some information about doing that with the Docker daemon:
https://docs.docker.com/registry/insecure/
I hope this helps. Let me know if you have any more questions.
It looks like you are having the same problem described here: https://github.com/kubernetes/kubernetes/issues/43924.
This solution should probably work for you:
As far as I remember this was a docker issue, not a kubernetes one.
Docker does not use linux's ca certs. Nobody knows why.
You have to install those certs manually (on every node that could
spawn those pods) so that docker can use them:
/etc/docker/certs.d/mydomain.com:1234/ca.crt
This is a highly annoying issue as you have to butcher your nodes
after bootstrapping to get those certs in there. And kubernetes spawns
nodes all the time. How this issue has not been solved yet is a
mystery to me. It's a complete showstopper IMO.
Then it's just a question of how to run this for every node. You could do that with a DaemonSet which runs a script from a ConfigMap, as described here: https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets. That article refers to a GitHub project https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial.
The magic is in the DaemonSet.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-initializer
  labels:
    app: default-init
spec:
  selector:
    matchLabels:
      app: default-init
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: node-initializer
        app: default-init
    spec:
      volumes:
      - name: root-mount
        hostPath:
          path: /
      - name: entrypoint
        configMap:
          name: entrypoint
          defaultMode: 0744
      initContainers:
      - image: ubuntu:18.04
        name: node-initializer
        command: ["/scripts/entrypoint.sh"]
        env:
        - name: ROOT_MOUNT_DIR
          value: /root
        securityContext:
          privileged: true
        volumeMounts:
        - name: root-mount
          mountPath: /root
        - name: entrypoint
          mountPath: /scripts
      containers:
      - image: "gcr.io/google-containers/pause:2.0"
        name: pause
You could modify the script that is in the ConfigMap to pull your cert and put it in the correct directory.
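A rough sketch of what that ConfigMap could look like for this case, assuming your in-house root CA is shipped inside the same ConfigMap and the registry is reachable at mydomain.com:1234 (both of these are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: entrypoint
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -euo pipefail
    # The DaemonSet above mounts the node's root filesystem at ROOT_MOUNT_DIR (/root),
    # so this ends up as /etc/docker/certs.d/mydomain.com:1234/ca.crt on the node itself.
    CERT_DIR="${ROOT_MOUNT_DIR}/etc/docker/certs.d/mydomain.com:1234"
    mkdir -p "${CERT_DIR}"
    cp /scripts/ca.crt "${CERT_DIR}/ca.crt"
    echo "Installed in-house root CA for mydomain.com:1234"
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...your in-house root CA certificate here...
    -----END CERTIFICATE-----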
On my local machine I created a Windows Docker/Nano Server container and was able to push it to an Azure Container Registry using this command (the reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, and that is not possible on Linux):
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker container IS visible inside the Azure Container Registry, which is MyContainerRegistry.
I know that in order to run it I would have to create a Container Instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command:
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod is created successfully.
When I run 'get pods' I see the newly created pod.
However, when I look at the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector."
Does that mean I simply can't run a Docker Windows container in Azure using AKS?
Is there any way to run a Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use AKS Engine (examples).
Bear in mind that Windows support in Kubernetes is a bit lacking, so you will unfortunately run into issues.
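One quick way to confirm that the problem is the node OS (and not the image or the pull secret) is to list the OS label of your nodes; on a Linux-only cluster nothing will match the windows node selector:

kubectl get nodes -L beta.kubernetes.io/os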