Has anyone ever tried to update existing environment variable values after a container instance has been provisioned in Azure Container Instances (ACI)?
Currently, it seems there is no way to update them, either through the portal or the Azure CLI.
Thanks in advance.
This is covered in the following GitHub issue:
https://github.com/MicrosoftDocs/azure-docs/issues/31168
In that issue, we point to the following documents:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-update
All in all, you can update the variables, but it still means recreating the container, or "redeploying" it with the updated variables, which in turn terminates the existing container and deploys a new one. So it's a bit of a yes-and-no answer and scenario.
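For illustration, a minimal redeploy sketch with the Azure CLI (the resource group, container and image names are placeholders); re-running the create command against an existing container group replaces the running container with one that has the new values:
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image myregistry.azurecr.io/myimage:latest \
  --environment-variables KEY1=newvalue1 KEY2=newvalue2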
Related
I have an Azure Container Instance, and now I want to update this container. I am trying to integrate a Log Analytics workspace, and I have a WORKSPACE_ID and WORKSPACE_KEY.
I'm following this Azure documentation, but it only has a create-container example; I need an update example.
Can anyone help me update the Azure Container Instance, or point me to an example or documentation?
The restart policy of the container instance group should be 'Always'; if it is 'On Failure', I think it may not work ...
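Since updating means redeploying, a sketch of a redeploy that sets both the workspace values and an 'Always' restart policy might look like this (all names are placeholders; re-running the create command replaces the running container):
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image myregistry.azurecr.io/myimage:latest \
  --restart-policy Always \
  --log-analytics-workspace <WORKSPACE_ID> \
  --log-analytics-workspace-key <WORKSPACE_KEY>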
We were experiencing timeouts the first time a function from a Function App was called, so we moved from a normal to a premium service plan since, in theory, you can always have a warmed instance ready to answer a call (based on this documentation).
The thing is, when trying to configure this functionality, we cannot see the setting shown in the documentation. This is our portal:
And this is the portal setting appearing in the documentation:
The functions are still timing out the first time they are called, so we did not see any difference moving from the normal plan to the premium one. Are we missing something?
I couldn't reproduce your problem; I can set the pre-warmed instances in my portal. But I have an idea to solve it.
Try using PowerShell in Azure Cloud Shell to configure the pre-warmed instances of your function app instead of the portal:
az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.preWarmedInstanceCount=<desired_prewarmed_count> --resource-type Microsoft.Web/sites
Test whether it can be set. If it can't, have a look at the error; this may be a problem with the portal.
If you have doubts, please let me know.
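To verify the value actually took effect, you could query it back (a sketch using the same placeholders as above):
az resource show -g <resource_group> -n <function_app_name>/config/web \
  --resource-type Microsoft.Web/sites \
  --query properties.preWarmedInstanceCount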
Turns out that you need to modify the Function App settings, while we were tweaking the Service Plan instead.
The pre-warmed instances setting is there, but it is not having the expected result; we are still facing the same timeouts we had with the normal Service Plan.
I have been looking into the Logic Apps designer in Microsoft Azure for a couple of days. Thank you for your help! I am stuck on the following:
Context
I wanted to perform some actions involving multiple files in a Dropbox. The Logic App did not offer an off-the-shelf solution, so I created a Python script that did exactly what I wanted.
I then decided to create an image of this script in order to be able to use it from the Azure platform within Logic Apps.
The container registry contains the image I pushed to Azure, and I created a container instance that includes only this one image, i.e. the Python script.
Everything works.
Current structure
From what I read, it seems we can run the container instance by using the action called 'Create group container', then adding an 'Until' action (run until the state equals Succeeded), and finally using 'Delete the container group'.
I have a trigger that has been tested and that works.
Issue
When running the Logic App, the action 'Create group container' fails with:
"code": "InaccessibleImage",
"message": "The image '<name_of_the_image>' in container group '<name_of_the_group>' is not accessible. Please check the image and registry credential."
Question
How can I correct what seems to be a basic error on my part?
Where can this registry credential be appropriately corrected?
Update
I have tried removing everything, assigning myself the 'Owner' role on the container registry, then adding the container instance, assigning myself the 'Owner' role on the container instance, and then rebuilding the Logic App. I ran it again and got the same error.
I figured out the issue.
Since in my case it is a private container registry, I needed to add the following to the action 'Create group container': properties.imageRegistryCredentials.
In it, you will be required to enter the following information, which is available under the Access keys of the container registry:
[
  {
    "password": "<yourpassword>",
    "server": "<yourloginserver>",
    "username": "<yourusername>"
  }
]
So glad and I hope it helps others!
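If it saves someone a trip to the portal, the same values can also be read with the Azure CLI (a sketch; 'myregistry' is a placeholder, and the registry's admin account must be enabled for credentials to exist):
az acr show --name myregistry --query loginServer
az acr credential show --name myregistry \
  --query "{username:username, password:passwords[0].value}"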
To set the credentials of the ACI inside the 'Create or update container group' action in the Logic App, you need to add a parameter (see the picture).
add parameter for ACI credentials
I'm using Azure DevOps pipelines to update our deployment in a K8s cluster in Azure. It worked fine until yesterday, but for some reason the Pods in the cluster now remain in their previous state. I can see that the image was successfully updated in ACR (container registry) and has the 'latest' tag. However, the release pipeline doesn't seem to be doing anything useful. I use the 'set' command in the task to update the Pod (it is well described in the Kubernetes docs and cheat sheet here).
This is the command sample extracted from the log:
kubectl set image deployments/identityserver identityserver='myacr'/identityserver:latest -n identityserver-dev
As it indicates, I'm getting the latest image from ACR and trying to roll out an update. It executes fine (both in cmd and Azure DevOps) with no errors, although the Pod remains unaffected. Have I missed something in the docs? Should I raise a ticket with Microsoft?
Why do you have quotes in the image name? Also, 'latest' won't work if the deployment already points at 'latest'; you need to be specific: https://github.com/kubernetes/kubernetes/issues/33664.
This is not an Azure issue.
Please check the answers to a similar question on SO about why it is not a good option to use the :latest tag in your Deployment spec, along with the workarounds provided.
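A common workaround (a sketch; the image and namespace names come from the question, the version tag is a placeholder) is to give every build a unique tag, so the Deployment spec actually changes and a rollout is triggered:
docker tag identityserver myacr.azurecr.io/identityserver:1.0.42
docker push myacr.azurecr.io/identityserver:1.0.42
kubectl set image deployments/identityserver \
  identityserver=myacr.azurecr.io/identityserver:1.0.42 \
  -n identityserver-dev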
I have a problem with my build/release pipeline and Azure Container Registry.
I use an Azure Resource Group Deployment task to deploy the Azure Container Registry (and other stuff), and it works perfectly.
I have the loginServer, username and password in output variables so I can reuse them.
Then I want to build and push an image to ACR, but I can't set the name of the registry (which I get from the output variable) with a variable. I have to choose the registry when I set up the definition, but it is not created at that moment.
Is there a way to do this ?
As a workaround, I use the Azure Resource Group Deployment task to create the registry and then pass the output variables to a PowerShell script which builds, tags and pushes my images to the registry.
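A minimal sketch of what such a script can do (the variable and image names are placeholders standing in for the deployment task's output variables):
docker login $LOGIN_SERVER -u $ACR_USER -p $ACR_PASSWORD
docker build -t $LOGIN_SERVER/myimage:$BUILD_ID .
docker push $LOGIN_SERVER/myimage:$BUILD_ID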
If nobody has a better way, I think I will post a UserVoice suggestion to change that.
When you say you use an Azure Resource Group Deployment task, are you referring to VSTS?
If you could provide more specific repro steps, I might be more helpful.
I'd also suggest you take a look at https://aka.ms/acr/build as an easy way to natively docker build images with your registry. ACR Build is now available in all regions and simplifies many of the experiences you may be hitting.
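For reference, a minimal ACR Build invocation (a sketch; registry and image names are placeholders) builds in Azure from the current directory's Dockerfile and pushes to the registry, with no local Docker daemon needed:
az acr build --registry myregistry --image myimage:v1 .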
Daniel just made this post that helps with the VSTS integration: https://www.danielstechblog.io/building-arm-based-container-images-with-vsts-and-azure-container-registry-build/
Steve
Sorry for the delay, I was out of the office.
I just retried fixing my problem, and it seems that I can now enter free text (and therefore a release variable) in the VSTS Docker task to specify the ACR I created beforehand with an Azure Resource Group Deployment task.
So no problem anymore.
Thank you for your response, I will take a look at ACR Build :)
Bastien