Quite new to this stuff, but I seem to be missing something. I pushed an image to a private Azure registry and spawned a container instance right through the portal. Works like a charm. Now I changed something, pushed again and... what do I do? Kill and delete the instance and recreate it? Every time?
Br,
Daniel
Another good way to do this these days is to restart the container. You can run the Azure CLI command or do it from the portal.
az container restart -g "XXX" -n "XXX"
An added benefit is that your public IP stays the same.
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-restart
You don't need to delete every time! Just make sure your Docker tag is always the same; then, after pushing, you can just restart your app service.
Tag example:
docker tag myimage myregistry.azurecr.io/myimage:latest
latest in this case is my tag.
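A minimal sketch of the push-then-restart flow, using the hypothetical registry from the tag example and placeholder app/resource group names:
docker push myregistry.azurecr.io/myimage:latest
az webapp restart --name <app_name> --resource-group <RG_name>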
Generally, yes. But you can create a webhook that will invoke something when a new image is pushed to the repo. That can act as a way to automate redeployment, as sketched below.
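A minimal sketch of registering such a webhook on ACR, with hypothetical registry and endpoint names; ACR will POST to the URI on every image push:
az acr webhook create --registry myregistry --name redeployhook --uri https://example.com/redeploy --actions push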
A possible solution is to use an Azure managed DNS name for the container:
az container create -n helloworld --image microsoft/aci-helloworld -g myResourceGroup --dns-name-label mycontainer
This way your DNS name will always stay the same.
Just restart the container using the portal, or use the following Azure CLI command if the container is in a running state.
az container restart --name <container_instance_name> --resource-group <RG_name>
To confirm the successful pull/update of the image, you should see a pull event before the start event near the time you restarted.
I have an Azure ContainerApp deployed with a single revision, and I'd like to stop it - but not delete it and have to re-deploy it. I see the image in the registry, and there are options via the portal to deploy it to an AppService or ContainerInstance, but not to a ContainerApp.
I also have looked through the az CLI, specifically az containerapp, but see no way to stop a running instance. I can set the scale to 0-1, but it still runs.
Am I missing something? Stopping an instance seems like a pretty normal thing to do...
EDIT: Setting all revisions to inactive doesn't seem to be allowed via the portal.
You can deactivate a revision to shut down the containers. If you deactivate all active revisions, you will effectively stop your containerapp.
https://learn.microsoft.com/en-us/azure/container-apps/application-lifecycle-management
Once a revision is no longer needed, you can deactivate a revision with the option to reactivate later. During deactivation, containers in the revision are shut down.
When you need it again, you can use activate to get new replicas.
az containerapp revision deactivate --resource-group <RG_name> --revision <revision_name> [--name <containerapp_name>]
az containerapp revision activate --resource-group <RG_name> --revision <revision_name> [--name <containerapp_name>]
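For example, with hypothetical resource, app, and revision names:
az containerapp revision deactivate --resource-group myResourceGroup --name myapp --revision myapp--rev1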
If you visit the container app URL after deactivating all revisions, you will receive an error:
Error 403 - This Container App is stopped.
I'm using Minikube for development and I need to build a k8s app that pulls all its images from ACR; all the images are already stored in ACR.
To pull images from Azure, what I need to do is create a secret with the user and password of the Azure account and pass this secret to every image that I want to pull, using imagePullSecrets (documentation here).
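For illustration, a minimal sketch of that per-pod wiring, assuming hypothetical secret, registry, and image names:
# Create the registry secret once...
kubectl create secret docker-registry acr-secret --docker-server=myregistry.azurecr.io --docker-username=<user> --docker-password=<password>
# ...then reference it in every single pod spec
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: myregistry.azurecr.io/myimage:latest
  imagePullSecrets:
  - name: acr-secret
EOF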
Is there a way to add this registry as a global setting for the namespace, or for the project?
I don't understand why every image needs to get the secret explicitly in the spec.
Edit:
Thanks for the comments, I'll check them later; for now I resolved this problem at the Minikube level. There is a way to set a private registry in Minikube (doc here).
In my version this bug exists, and this answer resolves the problem.
As far as I know, if you do not use K8s in Azure, I mean Azure Kubernetes Service, then there are two ways I know of to pull images from ACR. One is the way you already know, using secrets. The other is to use a service account, but you also need to configure it in each deployment or pod, the same way as with secrets.
If you use Azure Kubernetes Service, then you just need to assign the AcrPull role to the service principal of the AKS cluster, and then you don't need to set anything per image.
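A minimal sketch of wiring that up, assuming hypothetical cluster and registry names; the --attach-acr shortcut assigns the AcrPull role for you:
az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr myregistry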
You can add imagePullSecrets to a service account (e.g. to the default serviceaccount).
It will automatically add imagePullSecrets to the spec of any pod that uses this specific (e.g. default) serviceaccount, so you don't have to do it explicitly.
You can do it running:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
You can verify it with:
$ kubectl run nginx --image=nginx --restart=Never
$ kubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{"\n"}'
myregistrykey
Also check out the k8s docs: add-image-pull-secret-to-service-account.
In my case, I had a local Minikube installed in order to test my charts and my code locally. I tried most of the solutions suggested here and in other Stack Overflow posts, and the following are the options I found:
Move the image from the local Docker registry to Minikube's registry and set the pullPolicy to Never or IfNotPresent in your chart.
docker build . -t my-docker-image:v1
minikube image load my-docker-image:v1
$ minikube image list
rscoreacr.azurecr.io/decibel:0.0.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
...
## Now edit your chart and change the `pullPolicy`.
helm install my-name chart/ ## should work (note: Helm release names can't contain underscores).
I think that the main disadvantage of this option is that you need to change your chart and remember to change the values back to their previous values afterwards.
Create a secret that holds the credentials to the acr.
First log in to the ACR via:
az acr login --name myregistry.azurecr.io --expose-token
The output of the command should show you a user and an access token.
Now you should create a Kubernetes secret (make sure that you are on the right Kubernetes context - Minikube):
kubectl create secret docker-registry my-azure-secret --docker-server=myregistry.azurecr.io --docker-username=<my-user> --docker-password=<access-token>
Now, if your chart uses the default service account (when you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace), you should edit the service account via the following command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-azure-secret"}]}'
I didn't like this option because if I have a different secret provider for every Helm chart, I need to overwrite the YAML with the imagePullSecrets.
Another alternative you have is using Minikube's registry-creds addon, as sketched below.
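A minimal sketch of that route; the addon prompts interactively for your ACR (and other registries') credentials:
minikube addons configure registry-creds
minikube addons enable registry-creds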
Personally, the solution I went for is the first one with a tweak: instead of adding the pullPolicy in the YAML itself, I override it when I install the chart:
$ helm install --set image.pullPolicy=IfNotPresent <name> charts/
I have created a clamav container instance (mkodockx/docker-clamav:alpine) within Azure, but every few days it tends to be recreated by itself, pulling the image over and over again.
Is something wrong? Why would it be doing this?
I also have an Azure App Service which makes calls to the container instance over port 3310, but every few days it can't reach it...
What is going on? Why can't it be reached? I reached out to Azure support but they were super unhelpful.
I'm no expert in containerization so please dumb it down for me :)
Thanks
I would start with setting the restart policy of the Azure container group.
There are three restart policies for Azure container groups:
Never
Always
OnFailure
The default policy is Always.
You could change the policy and observe the behavior in your case.
You could use the command below to change the restart policy:
az container create \
--resource-group <myResourceGroup> \
--name <mycontainer> \
--image <IMAGE> \
--restart-policy OnFailure #(or Never)
I've created a web app running the Grafana docker image like this:
az group create --name grp-test-container-1 \
    --location "West Europe"
az appservice plan create --name asp-test-container-1 \
    --resource-group grp-test-container-1 \
    --sku B1 \
    --is-linux
az webapp create --resource-group grp-test-container-1 \
    --plan asp-test-container-1 \
    --name app-test-container-1 \
    --deployment-container-image-name grafana/grafana:latest
Then I updated the app settings in order to pass env variables to the docker run command:
az webapp config appsettings set --name app-test-container-1 \
    --settings GF_INSTALL_PLUGINS='grafana-azure-monitor-datasource' \
    --resource-group grp-test-container-1
Then I need to restart the container in order to get the added env variable in the docker run command.
I tried to restart the web app, stop/start it, change the docker image name and save under Container Settings... nothing works.
Any suggestions?
Solution/Bug
As Charles Xu said in his answer, to reload the container you need to change the docker image and save in order to make the web app fetch the image again and apply the added env variables.
In my case, I did that change and then looked at the log output, but the log never got updated. I waited 5-10 minutes and still no logs were added.
But when I visited the site and browsed to the extension, which was installed by the env variables, I could see that it all had gone through.
So, to summarize: the log in the Container Settings is not to be trusted; when making a change, those changes might not show up in the log.
I just got off the phone with a support engineer from the Azure web apps/app services team, after having pulled my hair out for several days. Literally.
So in case anyone else having trouble with their app service not responding to restarts, configuration changes, docker image changes, etc, you can try this:
In the Azure Portal navigate to your app service and then "Configuration"->"General settings" and set the "Always on" setting to "On". This setting is by default set to "Off" and will make the app service enter an "idle state" after some time of not receiving any requests.
The only way to trigger the app service out of this idle state is to perform a request towards it. Restarts, config changes, docker image changes, etc., will have no effect until that first request is done. Setting the "Always on" setting to "On" will prevent the app service from entering this idle state, and so it will always be responsive.
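If you prefer scripting this over clicking through the portal, a minimal sketch of flipping that switch via the CLI, using the app and resource group names from the question above:
az webapp config set --name app-test-container-1 --resource-group grp-test-container-1 --always-on true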
In terms of cost, this setting change should have no impact. That is, unless you are trying to pack as many applications into a single app service plan as possible, where many of them are seldom in use and hence, while idle, don't consume any of your app service plan's total resources.
What you need to do is change the image from grafana/grafana:latest to grafana/grafana: just delete the :latest version tag and click the save button below. Then it will work.
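If you'd rather script that image change, a minimal sketch of the same swap via the CLI, using the names from the question (the flag name may vary across CLI versions):
az webapp config container set --name app-test-container-1 --resource-group grp-test-container-1 --docker-custom-image-name grafana/grafana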
I have a container building in GitLab and registering itself with the GitLab custom registry. Inside this container is a command that runs for a very long time. I would like to somehow deploy this container to Azure, and only kick off this long-running process inside a new container instance, on demand, from an administrative API service. I don't want the container running all the time, only for the time it takes to run the command.
I was thinking that this admin API could be a classic HTTP REST API service hosted under Azure "App Services", or possibly using the new "Function Apps" feature of Azure.
In my research, I found that using the Azure CLI commands, I can start a container like so:
az container create \
--resource-group myResourceGroup \
--name mycontainer2 \
--image microsoft/aci-wordcount:latest \
--restart-policy OnFailure \
--environment-variables NumWords=5 MinLength=8
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables
I would like to do this from the admin API, preferably using what looks to me like the official Azure npm package located here:
https://www.npmjs.com/package/azure
Ideally, it would be a single command to create and start the instance; being able to set the environment variables at the launch of the container, as in this example, is important to me. I'm not interested in moving all my code over into Azure; I would like to continue using GitLab for the source code and container registry, but if there is some reason I have to switch to using the Azure Container Registry, I need a way to somehow move the container registration over there using the GitLab CI YAML.
In all my searching, I couldn't find any way to do this but the closest documentation I found was here:
https://learn.microsoft.com/en-us/javascript/api/azure-arm-containerservice/containerserviceclient?view=azure-node-latest
At the current time there is no way to officially do this from the API; maybe in the future there will be.