Azure Container Instance being recreated - azure

I have created a ClamAV container instance (mkodockx/docker-clamav:alpine) in Azure, but every few days it gets recreated by itself, pulling the image over and over again, as you can see in the screengrab.
Is something wrong? Why would it be doing this?
I also have an Azure App Service which makes calls to the container instance over port 3310, but every few days it can't reach it...
What is going on? Why can't it be reached? I reached out to Azure support, but they were super unhelpful.
I'm no expert in containerization so please dumb it down for me :)
Thanks

I would start by setting the restart policy of the Azure container group.
There are three restart policies for Azure container groups:
Never
Always
OnFailure
The default policy is Always.
You could change the policy and observe the behavior in your case.
You can use the command below to change the restart policy:
az container create \
--resource-group <myResourceGroup> \
--name <mycontainer> \
--image <IMAGE> \
--restart-policy OnFailure #(or Never)
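To see what is actually triggering the restarts, it can also help to look at the container group's events and restart count. A minimal sketch, assuming the same placeholder resource group and container names as above:
# Show recent events for the container group (each pull/start and its reason)
az container show \
--resource-group <myResourceGroup> \
--name <mycontainer> \
--query "containers[0].instanceView.events"

# Check how many times the container has been restarted
az container show \
--resource-group <myResourceGroup> \
--name <mycontainer> \
--query "containers[0].instanceView.restartCount"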


Failed to pull image - Azure AKS

I have been following this guide to deploy an application on Azure using AKS.
Everything was fine until I deployed; one node is in a not-ready state with ImagePullBackOff status.
kubectl describe pod output
Running the command below succeeds, so I am sure authentication is happening:
az acr login --name siddacr
and this command lists the image that was uploaded:
az acr repository list --name <acrName> --output table
I figured it out.
The error was in the name of the image in the deployment.yml file.
ImagePullBackOff might be caused by one of the following reasons (see the sketch after the list):
The image or tag doesn’t exist
You’ve made a typo in the image name or tag
The image registry requires authentication
You’ve exceeded a rate or download limit on the registry

Pull images from an Azure container registry to a Kubernetes cluster

I have followed this tutorial microsoft_website to pull images from an Azure container registry. My YAML successfully creates a job pod, which can pull the image, BUT only when it runs on the agentpool node in my cluster.
For example, adding nodeName: aks-agentpool-33515997-vmss000000 to the YAML works fine, but when specifying a different node name, e.g. nodeName: aks-cpu1-33515997-vmss000000, the pod fails. The error message I get with describe pods is Failed to pull image, followed by kubelet Error: ErrImagePull.
What am I missing?
Create secret:
kubectl create secret docker-registry <secret-name> \
--docker-server=<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
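Once the secret exists, pods have to reference it, or nodes that do not already have the image cached will fail the pull. One option is to attach it to the namespace's default service account so every pod picks it up automatically; a minimal sketch, assuming the default namespace and the placeholder secret name from above:
# Attach the pull secret to the default service account so pods in the
# namespace use it without listing imagePullSecrets explicitly
kubectl patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "<secret-name>"}]}'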
As #user1571823 said, the solution to the problem is deleting the old image from the ACR and creating/pushing a new one.
The problem was related to some sort of corruption in the image saved in the Azure Container Registry (ACR). The reason one agent pool could pull the image was that the image already existed on the VM.
Hence, as #andov said, it is a good option to open a support case for AKS from the subscription where AKS is deployed. The support team has full access to the AKS service backend and can tell you exactly what was causing your problem.
Four things to check (see the sketch after this list):
Is it a subscription issue? Are the nodes in different subscriptions?
Is it a permissions issue? Does the service principal of the node have rights to pull the image?
Is it a network issue? Are the nodes on different subnets?
Is there something about the image size or configuration that means it cannot run on the other cluster?
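For the permissions check in particular, a minimal sketch with placeholder cluster, resource group, and registry names; az aks check-acr validates that the cluster's identity can authenticate to and pull from the registry:
# Validate that the AKS cluster can pull from the registry
az aks check-acr \
--name <aks-cluster> \
--resource-group <resource-group> \
--acr <container-registry-name>.azurecr.io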
Edit
New-AzAksNodePool has a parameter -DefaultProfile
It can be an AzContext, AzureRmContext, or AzureCredential.
If this is different between your node pools, it would explain the error.

Azure WebApp container restart issue

I've created a web app running the Grafana Docker image like this:
az group create --name grp-test-container-1 \
--location "West Europe"
az appservice plan create --name asp-test-container-1 \
--resource-group grp-test-container-1 \
--sku B1 \
--is-linux
az webapp create --resource-group grp-test-container-1 \
--plan asp-test-container-1 \
--name app-test-container-1 \
--deployment-container-image-name grafana/grafana:latest
Then I updated the appsettings in order to pass env variables to the docker run command
az webapp config appsettings set --name app-test-container-1 \
--settings GF_INSTALL_PLUGINS='grafana-azure-monitor-datasource' \
--resource-group grp-test-container-1
Then I need to restart the container in order for the added env variable to be picked up by the docker run command.
I tried restarting the web app, stopping/starting it, and changing the Docker image name and saving under Container Settings... nothing works.
Any suggestions?
Solution/Bug
As Charles Xu said in his answer, to reload the container you need to change the docker image and save in order to make the web app fetch the image again and apply the added env variables.
In my case, I made that change and then looked at the log output, but the log never got updated. I waited 5-10 minutes and still no logs were added.
But when I visited the site and browsed to the plugin that was installed via the env variables, I could see that it had all gone through.
So, to summarize: the log under Container Settings is not to be trusted; when making a change, it might not show up in the log.
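If the Container Settings log is not updating, streaming the log directly can be a more reliable way to confirm that the container actually picked up the change. A minimal sketch, reusing the names from the question (container logging may need to be enabled first):
# Enable container logging to the filesystem (if not already enabled)
az webapp log config --name app-test-container-1 \
--resource-group grp-test-container-1 \
--docker-container-logging filesystem

# Stream the container log to the terminal
az webapp log tail --name app-test-container-1 \
--resource-group grp-test-container-1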
I just got off the phone with a support engineer from the Azure web apps/app services team, after having pulled my hair out for several days. Literally.
So, in case anyone else is having trouble with their app service not responding to restarts, configuration changes, Docker image changes, etc., you can try this:
In the Azure Portal, navigate to your app service, then "Configuration" -> "General settings", and set the "Always on" setting to "On". This setting is set to "Off" by default, which makes the app service enter an "idle state" after some time without receiving any requests.
The only way to bring the app service out of this idle state is to perform a request against it. Restarts, config changes, Docker image changes, etc., will have no effect until that first request is made. Setting "Always on" to "On" prevents the app service from entering this idle state, so it will always be responsive.
In terms of cost, this setting change should have no impact, unless you are trying to pack as many applications as possible into a single App Service plan, where many of them are seldom in use and, while idle, consume none of the plan's resources.
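Always On can also be enabled from the CLI; a minimal sketch, reusing the app and resource group names from the question:
# Turn on "Always on" so the app service never enters the idle state
az webapp config set --name app-test-container-1 \
--resource-group grp-test-container-1 \
--always-on true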
What you need to do is change the image from grafana/grafana:latest to grafana/grafana, i.e. just delete the :latest version tag, and click the save button below. Then it will work.
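The same image change can be made from the CLI instead of the portal; a minimal sketch, reusing the names from the question:
# Point the web app at the untagged image name so it fetches the image again
az webapp config container set --name app-test-container-1 \
--resource-group grp-test-container-1 \
--docker-custom-image-name grafana/grafana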

How can I kick off a container instance using the Azure api?

I have a container building in GitLab and registering itself with GitLab's container registry. Inside this container is a command that runs for a very long time. I would like to somehow deploy this container to Azure and only kick off this long-running process inside a new container instance, on demand, from an administrative API service. I don't want the container running all the time, only for the time it takes to run the command.
I was thinking that this admin API could be a classic HTTP REST API service hosted under Azure "App Services", or possibly using the new "Function Apps" feature of Azure.
In my research, I found that, using the Azure CLI, I can start a container like so:
az container create \
--resource-group myResourceGroup \
--name mycontainer2 \
--image microsoft/aci-wordcount:latest \
--restart-policy OnFailure \
--environment-variables NumWords=5 MinLength=8
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables
I would like to do this from the admin api, preferably using what looks to me like the official Azure npm package located here:
https://www.npmjs.com/package/azure
Ideally, it would be a single command to create and start the instance; being able to set the environment variables at the launch of the container, as in this example, is important to me. I'm not interested in moving all my code over to Azure; I would like to continue using GitLab for the source code and container registry, but if there is some reason I have to switch to the Azure Container Registry, I need a way to move the container registration over there using the GitLab CI YAML.
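For what it's worth, az container create can also pull from a private registry such as GitLab's by passing registry credentials, so a single CLI call (or the equivalent REST call it wraps) can create and start the instance with environment variables set at launch. A minimal sketch with placeholder GitLab project and deploy token values:
# Create and start an instance from a private GitLab registry image,
# authenticating with a deploy token and setting env variables at launch
az container create \
--resource-group myResourceGroup \
--name mycontainer2 \
--image registry.gitlab.com/<group>/<project>/<image>:latest \
--registry-login-server registry.gitlab.com \
--registry-username <deploy-token-username> \
--registry-password <deploy-token-password> \
--restart-policy Never \
--environment-variables NumWords=5 MinLength=8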
In all my searching, I couldn't find any way to do this but the closest documentation I found was here:
https://learn.microsoft.com/en-us/javascript/api/azure-arm-containerservice/containerserviceclient?view=azure-node-latest
At the current time there is no official way to do this from the API; maybe in the future there will be.

Restart Azure Container Instance

Quite new to this stuff, but I seem to be missing something. I pushed an image to a private Azure registry and spawned a container instance right through the portal. Works like a charm. Now I changed something, pushed again and... what do I do? Kill and delete the instance and recreate it? Every time?
Br,
Daniel
Another good way to do this these days is to restart the container. You can run the Azure CLI command or do it from the front end.
az container restart -g "XXX" -n "XXX"
An added benefit is that your public IP stays the same.
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-restart
You don't need to delete it every time! Just make sure your Docker tag is always the same; then, after pushing, you can just restart it.
Tag example:
docker tag myimage <registry>.azurecr.io/myimage:latest
latest is my tag in this case
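A minimal sketch of the resulting update loop, with placeholder registry, resource group, and container names:
# Re-tag and push the updated image under the same tag
docker tag myimage <registry>.azurecr.io/myimage:latest
docker push <registry>.azurecr.io/myimage:latest

# Restart the container instance so it pulls the updated image
az container restart --resource-group <resource-group> --name <container_instance_name>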
Generally - yes. But you can create a webhook that will invoke something when a new image is pushed to the repo. That can act as a way to automate redeployment.
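If the image is in ACR, a registry webhook is one way to wire this up; a minimal sketch with placeholder names, where <endpoint-url> is something you host (for example a Function App) that triggers the restart or redeployment:
# POST to <endpoint-url> whenever a new image is pushed to myimage:latest
az acr webhook create --registry <container-registry-name> \
--name <webhook-name> \
--actions push \
--uri <endpoint-url> \
--scope myimage:latest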
A possible solution is to use the Azure-managed DNS name for the container:
az container create -n helloworld --image microsoft/aci-helloworld -g myResourceGroup --dns-name-label mycontainer
This way your DNS name will always stay the same.
Just restart the container using the portal like the following:
Or use the following Azure CLI command if the container is in a running state:
az container restart --name <container_instance_name> --resource-group <RG_name>
To confirm the successful pull/update of the image, you should see a pull event before the start event near the time you restarted, like the following:
