I have an Azure container instance running in a subnet within a VNet.
I have been (until now) able to update the image of this container instance with a command like this one:
az container create \
--resource-group my_rg \
--name containername \
--image containerregistry.azurecr.io/myimage:latest \
--registry-login-server containerregistry.azurecr.io \
--registry-username username \
--registry-password password \
--vnet my_vnet \
--subnet my_subnet
Until now, when I needed to update the image in my container, I would build it, push it to my container registry in azure, and run this command.
The container would stop and restart with the new image.
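For reference, the build-and-push step looks roughly like this (registry and image names as in the command above):
# Rough sketch of the build-and-push step before re-running az container create
az acr login --name containerregistry
docker build -t containerregistry.azurecr.io/myimage:latest .
docker push containerregistry.azurecr.io/myimage:latest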
It may not be the issue, but I upgraded my Azure CLI recently; I am now on version 2.34.1.
When I run this command now I get this message:
(NetworkProfileCannotChange) The network profile of existing container group 'containername' cannot be changed. To change a network profile, you must delete and then create the container group with the changed property.
Code: NetworkProfileCannotChange
I don't want to change my network profile, I just want to update the image.
I have checked the network profile with
az network profile list --resource-group my_rg
and it looks fine to me.
I have double checked, my vnet and my subnet have not changed.
I don't understand why this command does not work anymore.
Any idea what's happening?
Cheers
I tested in my environment and it is working fine for me with both versions.
Earlier I was using Azure CLI version 2.32.0 and was able to create the container.
Now I have updated the Azure CLI to version 2.34.1 and tried to change or update the image of the container using the same command with the same existing VNet and subnet.
I only get a similar kind of error when I change the subnet name.
Suggestion 1: I would suggest you recheck that your existing container is not picking up a different subnet or VNet when you create the container with the updated image.
Suggestion 2: Sometimes updating the image on a running, existing container is blocked and you get this error: "If you are going to update the os type, restart policy, network profile, CPU, memory or GPU resources for a container group, you must delete it first and then create a new one".
Suggestion 3: To avoid re-creating the container group, schedule the container instance to run once a day. Whenever it starts, it pulls the Docker image with the :latest tag from Azure Container Registry, so the container group does not have to be re-created.
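A rough sketch of that approach, assuming the group is already configured with the :latest tag as in your command, is simply to restart the existing group so the image is pulled again without touching the network profile:
# Restart the existing container group; on start it pulls :latest again
az container restart --resource-group my_rg --name containername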
For more information, you can refer to https://circleci.com/blog/azure-custom-images/ and the related issue.
Need some help figuring out the Azure CLI command and parameters to update cache setting of a live managed data disk attached to an Azure VM.
A managed data disk is created and attached to an Azure VM outside of my control. By default, Host caching is set to Read-Only. For some performance reasons, I would like to set Host caching to None for the attached data disk.
What did I try?
Tried to use below command
az vm update --subscription my-subscription --name my-vm-name --resource-group my-rgroup-name --disk-caching os=ReadWrite
This command updates the OS disk's host cache setting. However, it could not update the managed data disk's setting. I tried different keys like managed, data-disks, attached, etc. instead of os in os=ReadWrite; nothing worked, and the command throws a ValueError:
ValueError: invalid literal for int() with base 10: 'data-disks'
I explored another command, az disk update; however, it does not have an option to update the managed disk cache setting.
I tried to detach and re-attach the managed disk with --caching set to None. It works; however, this is not permitted in my case.
az vm disk detach --subscription my-subscription -g my-rgroup-name --vm-name my-vm-name --name managed-disk-name
az vm disk attach --subscription my-subscription -g my-rgroup-name --vm-name my-vm-name --name managed-disk-name --caching none
I need this for automation, so changing this setting from the Azure portal UI is not an option.
Suggestions are much appreciated.
Without detaching the disk from the virtual machine, you can use the command below to change the managed disk caching from ReadWrite/ReadOnly to None.
We have tested this command in our environment by creating a virtual machine, attaching a data disk to it, and changing the caching; it works fine.
Here is the command:
az vm update --resource-group <resource-group-name> --name <vmName> --disk-caching os=ReadWrite 0=None
Here, 0 is the LUN of the data disk whose caching you want to change.
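If you are not sure which LUN to pass, you can first list the attached data disks with their LUNs and current caching, for example:
# List the data disks with their LUNs and current caching settings
az vm show --resource-group <resource-group-name> --name <vmName> \
  --query "storageProfile.dataDisks[].{name:name, lun:lun, caching:caching}" --output table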
We checked the disk caching state before running the command, then executed it without stopping the virtual machine or detaching the disk, and the operation succeeded without any interruption.
I have hosted a website in an Azure virtual machine scale set by following the steps below:
1. Create a VM and do the necessary changes/installations in IIS.
2. Create a snapshot of the VM. This ensures that the above instance can be used for future changes.
3. Create a disk from the snapshot.
4. Create a VM from the disk.
5. RDP to the instance and generalize it for deployment (sysprep):
   - Run %WINDIR%\system32\sysprep\sysprep.exe as admin.
   - Enter System Out-of-Box Experience (OOBE).
   - The Generalize check box is selected.
   - Shutdown Option = Shutdown.
6. Create an image (capture) from the above instance (see the CLI sketch below).
7. Create a VMSS from the above image.
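For reference, the capture step can be scripted roughly like this; the resource group, VM, and image names are placeholders:
# Rough CLI equivalent of the capture step, after running sysprep inside the VM
az vm deallocate --resource-group my_rg --name generalized-vm
az vm generalize --resource-group my_rg --name generalized-vm
az image create --resource-group my_rg --name web-image --source generalized-vm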
Suppose there is a change in the web build. Is there a way to update the scale set without following these steps again?
Azure virtual machine extensions provide capabilities such as post-deployment configuration and management, monitoring, security, and more. Production deployments typically use a combination of multiple extensions configured for the VM instances to achieve desired results.
This is also a good way to ensure availability of your system. The scale set will apply the update to one VM at a time, leaving the other VMs up and running.
The example below is taken from Microsoft Learn:
Deploy the update by using a custom script extension
In the Azure portal, run the following command to view the current upgrade policy for the scale set:
Azure CLI:
az vmss show \
--name webServerScaleSet \
--resource-group scalesetrg \
--query upgradePolicy.mode
Verify that the upgrade policy is set to Automatic. You specified this policy when you created the scale set in the first lab. If the policy were Manual, you would apply any VM changes by hand. Because the policy is Automatic, you can use the custom script extension and allow the scale set to do the update.
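If the policy turned out to be Manual, one way to switch it (not part of the original lab, so treat this as a sketch) is:
# Switch the scale set's upgrade policy to Automatic
az vmss update \
  --name webServerScaleSet \
  --resource-group scalesetrg \
  --set upgradePolicy.mode=Automatic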
Run the following command to apply the update script:
az vmss extension set \
--publisher Microsoft.Azure.Extensions \
--version 2.0 \
--name CustomScript \
--vmss-name webServerScaleSet \
--resource-group scalesetrg \
--settings "{\"commandToExecute\": \"echo This is the updated app installed on the Virtual Machine Scale Set ! > /var/www/html/index.html\"}"
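Afterwards you can confirm the extension was applied to the scale set, for example:
# List the extensions configured on the scale set
az vmss extension list \
  --vmss-name webServerScaleSet \
  --resource-group scalesetrg \
  --output table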
I removed two nodes of my Kubernetes cluster manually, first calling "kubectl drain <node>" and then "kubectl delete node <node>" for each. While the cluster seems to work without a problem, the Azure UI shows exactly two nodes more than I see when I use "kubectl get nodes". So when I configure Kubernetes to have 9 nodes in the Azure UI, only 7 nodes are there if I take a look with kubectl. Scaling up or down does not solve the problem, as Azure is always off by two nodes.
How can I solve this problem? Is there a way I can notify Azure that a node has been deleted?
If you want to solve the issue, you need to have a deeper understanding of the k8s cluster.
When you use the command kubectl delete to remove a node from the agent pool, it means the agent pool no longer has control over that node, but it does not mean the underlying machine is actually deleted. That is why the number of machines shown in the Azure portal does not change. This is what you are seeing.
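You can see this for yourself by listing the underlying scale set instances, assuming the agent pool is backed by a VMSS in the cluster's node resource group (the resource group and scale set names below are placeholders):
# The VM instances still exist even though kubectl no longer lists the nodes
az vmss list-instances --resource-group MC_group_name --name vmss_name --output table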
How can I solve this problem? Is there a way I can notify Azure that a node has been deleted?
Here are two questions. For the first, you can express it in this way:
How to restore the node that deleted before to the agent pool?
It's simple to solve: you only need to restart the kubelet service on that node. For example, if you use a VMSS as the agent pool of the AKS cluster and that node's instance ID is 4, you can do it like this:
az vmss run-command invoke --resource-group group_name --name vmss_name --instance-id 4 --command-id RunShellScript --scripts "service kubelet restart"
For the second one, you can only use an Azure command to let Azure know about the update. Here that means scaling the agent pool, for example with the Azure CLI command:
az aks nodepool scale --resource-group group_name --cluster-name cluster_name --name agentpool_name --node-count 2
I'm trying to implement Azure Key Vault such that API keys, credentials and other Kubernetes secrets are read into production and staging environments. Ultimately, I'd like to try to expand that to local development environments so devs don't have to mess with it at all. It is just read in when they start their cluster.
Anyway, I'm following this to enable Pod Identities:
https://learn.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity
When I get to this step, I'm modifying this command:
az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
To the following because I'm trying to change an existing cluster:
az aks update -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
This doesn't work, and I figured out I need to run each flag one at a time, so I had to run --enable-managed-identity first since --enable-pod-identity depends on it.
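In other words, I ended up running something like this (same cluster names as above):
# Each flag has to be applied separately; pod identity depends on managed identity
az aks update -g myResourceGroup -n myAKSCluster --enable-managed-identity
az aks update -g myResourceGroup -n myAKSCluster --enable-pod-identity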
At any rate, when I get to the --enable-pod-identity I get the following error:
Operation failed with status: 'Bad Request'. Details: Network plugin kubenet is not supported to use with PodIdentity addon.
So I try the --network-plugin azure and get:
az: error: unrecognized arguments: --network-plugin azure
Apparently this flag is not available with update.
Poking around in the Azure portal for the AKS resource, I do see kubenet listed, but I'm not able to change it.
So, the question: Is it possible to change the network plugin on an existing cluster, or do I need to start a new one?
EDIT: Looks like others are having similar issues on existing clusters:
https://github.com/Azure/AKS/issues/2094
Is it possible to change the network plugin on the existing cluster or do I need to start a new one?
It's not possible to change the network plugin on an existing cluster, so you need to create a new cluster and set the network plugin to azure at creation time. You can see there is no --network-plugin parameter in the CLI command az aks update, even if you install the aks-preview extension, which means changing the network plugin of an existing cluster is not supported.
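You can confirm what the existing cluster is using, and then create the replacement cluster with the azure plugin from the start (the new cluster name below is a placeholder), for example:
# Check the plugin on the existing cluster
az aks show -g myResourceGroup -n myAKSCluster --query networkProfile.networkPlugin -o tsv
# A new cluster has to be created with the azure plugin at creation time
az aks create -g myResourceGroup -n myNewAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure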
I just started using Azure. I'm using containers. For each container, I set a dns-name-label property. Then, after a few hours of Azure training, I decided to delete all my containers. None of the resources still exist.
I cannot create a new container using a dns-name-label which has been deleted.
The DNS name label 'xxx' in container group 'x' not available. Try using a different label.
I would prefer to find a solution rather than change all my dns-name-labels, because I have an existing (really long) software configuration which uses all of these dns-name-labels.
Would someone have a solution, please?
I already tried a few commands like az cache list and az cache purge.
The error shows that the DNS name still exists. Probably the container group has not completely finished deleting, or the DNS name is in use by another container group somewhere.
I cannot reproduce your issue. You should use az container delete --name MyContainerGroup --resource-group MyResourceGroup to remove that container group, then create your new container with the old DNS name.
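A quick way to check whether some group is still holding the label is to list the container groups that remain in the subscription, for example:
# List the container groups that still exist
az container list --output table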