Stop a container instance group when one of the containers is terminated - Azure

I am creating a container group with a container that runs E2E tests on a website. How can I stop the entire group when one of the containers has stopped running (in this case, the E2E tests)?
I am creating this through a pipeline and I need to stop the front-end container once the tests are done.
apiVersion: 2018-10-01
location: northeurope
name: e2e-uat
properties:
  containers:
  # name of the instance in Azure.
  - name: e2etestcafe
    properties:
      image: registry.azurecr.io/e2e/e2etestcafe:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 3
  - name: customerportal
    properties:
      image: registry.azurecr.io/e2e/customerportal:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 80
  osType: Linux
  restartPolicy: never
tags: null
type: Microsoft.ContainerInstance/containerGroups

As far as I know, ACI does not have a built-in feature for this, so you need to check the containers' state yourself.
I recommend creating a script that loops, checking the containers' state until it reaches the state you expect, and then stops the whole container group. In Azure DevOps, you can use a release pipeline with three stages: one to create the container group, a second to run the script that checks the state, and a third to stop the container group.
To check the containers' state, the following CLI command is helpful:
az container show -g myResourceGroup -n myContainerGroup --query "containers[*].instanceView.currentState.state"
It will output all the containers' states as an array.
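A minimal sketch of such a check script, assuming the resource group and container group names used here and that the test container is named e2etestcafe (adjust the names to your own):

#!/usr/bin/env bash
# Poll the E2E test container until it terminates, then stop the whole group.
RG="myResourceGroup"
CG="e2e-uat"

while true; do
  state=$(az container show -g "$RG" -n "$CG" \
    --query "containers[?name=='e2etestcafe'].instanceView.currentState.state" -o tsv)
  echo "e2etestcafe state: $state"
  if [ "$state" = "Terminated" ]; then
    az container stop -g "$RG" -n "$CG"
    break
  fi
  sleep 30
done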

Related

Azure DevOps Build Agents in Kubernetes

We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet, we couldn't find any recommended approach to follow.
Details:
Azure DevOps Server
AKS 1.19.11
Looking for:
An AKS Kubernetes cluster where ADO can trigger its pipelines with the dependencies.
Pod scaling should happen according to the load coming from ADO.
Is there any default MS-provided image currently available for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions are highly appreciated.
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core container (for Windows hosts) or an Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
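To cover the Zulu JDK requirement, one option is a derivative Dockerfile; this is only a sketch (the base image name is a placeholder matching the deployment below, and the OpenJDK package stands in for the Zulu package, which you would install after adding Azul's APT repository per their instructions):

# Hypothetical derivative image; the agent image built per the article stays the base layer.
FROM <acr-server>/dockeragent:latest

# Add a JDK for the Java builds. Swap this for the Zulu package once Azul's APT repository is configured.
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-11-jdk-headless \
 && rm -rf /var/lib/apt/lists/*

# Leave the entrypoint untouched: start.sh from the base image must remain the last command.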
Note: Docker was replaced with containerd in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running Docker inside a container, then you must use Kubernetes version <= 1.18.x (currently not supported on Azure) as shown here, or run the agent in an alternative Docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4, here, must be changed to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: azdevops-agent
        image: <acr-server>/dockeragent:latest
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
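For reference, the azdevops secret referenced above could be created with something along these lines (all values are placeholders):

kubectl create secret generic azdevops \
  --from-literal=AZP_URL=https://dev.azure.com/yourOrganization \
  --from-literal=AZP_TOKEN=<personal-access-token> \
  --from-literal=AZP_POOL=<agent-pool-name>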
Pod scaling should happen according to the load coming from ADO.
You can use cluster-autoscaler and horizontal pod autoscaler. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [Reference]
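As a rough sketch (resource and cluster names are placeholders), enabling both could look like this; note that the horizontal pod autoscaler needs CPU requests set on the agent pods, and CPU-based scaling is only an approximation of build-queue load:

# Enable the cluster autoscaler on an existing AKS cluster
az aks update --resource-group myResourceGroup --name myAKSCluster \
  --enable-cluster-autoscaler --min-count 1 --max-count 5

# Add a CPU-based horizontal pod autoscaler for the agent deployment
kubectl autoscale deployment azdevops-deployment --cpu-percent=70 --min=1 --max=10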

Host a Docker container in Azure and have it active during a specific period of the day?

I would like to have a Docker container active only during certain times of the day so that test automation can run. Is that possible?
Yes, it is possible. A CronJob is designed to run a Job periodically on a given schedule, written in Cron format. A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
To run your automation tests:
Create a CronJob definition
Set the cron timer
Call your CMD
Here is a sample Hello World example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
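To run a test-automation container only at a fixed time of day instead of every minute, the same pattern applies; a sketch, assuming a hypothetical test image and a daily 06:00 schedule (cluster time, usually UTC):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-e2e-tests
spec:
  schedule: "0 6 * * *" # every day at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: e2e-tests
            image: myregistry.azurecr.io/e2e-tests:latest # hypothetical test image
          restartPolicy: Never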
You haven't given much information besides running and stopping a container. One of the simplest ways is to use the Docker CLI to run an instance of your container in Azure Container Instances. You create and use a context for Azure, and then docker run will create an ACI container group and run your container in Azure.
docker login azure
docker context create aci myacicontext
docker context use myacicontext
docker run -p 80:80 [yourImage]
docker rm [instanceName]
Ref: https://www.docker.com/blog/running-a-container-in-aci-with-docker-desktop-edge/ https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli

Azure Containers deployment - "Operation failed with status 200: Resource State Failed"

From Azure we are trying to create a container using Azure Container Instances with a prepared YAML file. From the machine where we execute the az container create command, we can log in successfully to our private registry (e.g. fa-docker-snapshot-local.docker.comp.dev on JFrog Artifactory) after entering the password, and we can docker pull the image as well:
docker login fa-docker-snapshot-local.docker.comp.dev -u svc-faselect
Login succeeded
So we can pull it successfully, and the image path is the same as when doing a manual docker pull:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
We have a YAML file for the deployment, and we are trying to create the container using the az command from the SAME server. In the YAML file we have set up the same registry information (server, username, and password) and the same image:
az container create --resource-group FRONT-SELECT-NA2 --file ads-azure.yaml
When we execute this command, it runs for 30 minutes, and after that this message is displayed: "Deployment failed. Operation failed with status 200: Resource State Failed"
Full Yaml:
apiVersion: '2019-12-01'
location: eastus2
name: ads-test-group
properties:
  containers:
  - name: front-arena-ads-test
    properties:
      image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
      environmentVariables:
      - name: 'DBTYPE'
        value: 'odbc'
      command:
      - /opt/front/arena/sbin/ads_start
      - ads_start
      - '-unicode'
      - '-db_server test01'
      - '-db_name HEDGE2_ADM_Test1'
      - '-db_user sqldbadmin'
      - '-db_password pass'
      - '-db_client_user HEDGE2_ADM_Test1'
      - '-db_client_password Password55'
      ports:
      - port: 9000
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 4
      volumeMounts:
      - mountPath: /opt/front/arena/host
        name: ads-filesharevolume
  imageRegistryCredentials: # Credentials to pull a private image
  - server: fa-docker-snapshot-local.docker.comp.dev
    username: svcacct-faselect
    password: test
  ipAddress:
    type: Private
    ports:
    - protocol: tcp
      port: '9000'
  volumes:
  - name: ads-filesharevolume
    azureFile:
      sharename: azurecontainershare
      storageAccountName: frontarenastorage
      storageAccountKey: kdUDK97MEB308N=
  networkProfile:
    id: /subscriptions/746feu-1537-1007-b705-0f895fc0f7ea/resourceGroups/SELECT-NA2/providers/Microsoft.Network/networkProfiles/fa-aci-test-networkProfile
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Can you please help us understand why this error occurs?
Thank you.
As far as I know, there is nothing wrong with your YAML file; I can only give you some possible causes.
Make sure the configuration is all right: the server URL, username, and password, and also the image name and tag;
Change the port from '9000' to 9000, I mean remove the quotes;
Take a look at the Note: maybe the volume mount makes the container crash. In that case you need to mount the file share to a new folder, I mean a folder that does not already exist in the image.
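For the second point, the corrected ipAddress section would look like this (only the quoting of the port changes):

  ipAddress:
    type: Private
    ports:
    - protocol: tcp
      port: 9000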

Getting 'didn't match node selector' when running Docker Windows container in Azure AKS

On my local machine I created a Windows Docker/Nano Server container and was able to 'push' this container into an Azure Container Registry using this command (the reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, and that is not possible on Linux):
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker container IS visible inside the Azure container registry, which is MyContainerRegistry.
I know that in order to run it I have to create a Container Instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command:
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod was created successfully.
When I run 'get pods' I see the newly created pod.
However, when I look at the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3
nodes are available: 3 node(s) didn't match node selector."
Does this mean that I simply can't run a Docker Windows container in Azure using AKS?
Is there any way I can run a Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use AKS Engine (examples).
Bear in mind that Windows support in Kubernetes is a bit lacking, so you will run into issues, unfortunately.

How to link Docker containers on Azure?

I would like to link two Docker containers deployed on Azure (ACS).
I have a container running the API server made in Node.js and another container running Mongo.
I'd like to use something like "--link mymongodb" as I do on my PC, but there is no such parameter in az container create.
To create the containers I use this syntax:
az container create --name my-app --image myprivateregistry/my-app --resource-group MyResourceGroup --ports 80 --ip-address public
Perhaps I need to create a virtual network?
Could you point me in the right direction, please?
I think you are looking for features like Docker Compose on Azure. If you want to use Azure Container Instances, you should take a look at Deploy a multi-container container group with YAML or with an Azure template. It will help you create multiple containers in a container group, and the containers can connect to each other.
In addition, you can try Azure Kubernetes Service; maybe it can also help you. If you need more help, please leave me a message.
You will need to specify both images (your app and Mongo) in an Azure YAML file. It looks like a Docker Compose file, but it isn't.
Assuming your Node.js app runs on port 3000, this could be a YAML configuration for Azure Container Instances:
apiVersion: 2018-06-01
location: westeurope
name: my-app-with-mongo
properties:
  containers:
  - name: mongodb
    properties:
      image: mongo
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 27017
  - name: my-app
    properties:
      image: myprivateregistry/my-app
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 3000
  osType: Linux
  ipAddress:
    type: Public
    dnsNameLabel: my-app
    ports:
    - protocol: tcp
      port: '3000'
  imageRegistryCredentials:
  - server: myprivateregistry.azurecr.io
    username: username-for-myprivateregistry
    password: password-for-myprivateregistry
tags: null
type: Microsoft.ContainerInstance/containerGroups
Simply start it with
az container create --resource-group MyResourceGroup --file azure-container-group.yml
You can then access your Mongo database on localhost:27017, as all containers run on the same host:
Azure Container Instances supports the deployment of multiple containers onto a single host by using a container group.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-multi-container-yaml
Also mind the order of the containers you specify in the YAML file. You will want to start Mongo first, then your Node.js app, as it probably wants to connect to Mongo on start.
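If your app reads its connection string from the environment, you could also pass it explicitly to the app container; a sketch, assuming a hypothetical MONGO_URL variable that your Node.js code would read:

  - name: my-app
    properties:
      image: myprivateregistry/my-app
      environmentVariables:
      - name: 'MONGO_URL' # hypothetical variable name your app would read
        value: 'mongodb://localhost:27017'
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 3000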
