I have a web app hosted in Azure, running as a Linux container with the image stored in Azure Container Registry.
The web app is working fine and I have a couple of application settings configured. One of them is: WEBSITES_ENABLE_APP_SERVICE_STORAGE = TRUE
However, when I update the container image, this specific application setting reverts to false.
I have tried removing the setting and adding it again, both via the portal and the az CLI, but I still have no success keeping the value TRUE after an update. Under Diagnose and solve problems -> Web App restarted I can see the following message: Your application was recycled as application environment variables changed. This can most likely occur due to update in app settings or swap operation.
But there is no explanation of why it reverted back to false.
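For reference, re-adding such a setting via the CLI is typically done with az webapp config appsettings set, along these lines (a sketch using the placeholder names from the command below, not my exact command):
az webapp config appsettings set \
--resource-group MyResourceGroup \
--name MyWebApp \
--settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE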
Command for updating the container image
az webapp config container set \
--docker-custom-image-name "DOCKER|xx.azurecr.io/imagename:tag" \
--docker-registry-server-password topsecret \
--docker-registry-server-url https://index.docker.io \
--docker-registry-server-user username \
--name MyWebApp \
--resource-group MyResourceGroup
The command is successful and returns the following:
{
"name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
"slotSetting": false,
"value": "false"
},
When I update with a new image, I want the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to stay TRUE rather than revert to false as it does right now. I am not using slot settings, but I have tried setting both value and slotSetting to true, still with no change.
The solution for this problem is the following:
The command az webapp config container set has an option to set App Service storage for the container to true (it is false by default).
The parameter that needs to be added to the command is --enable-app-service-storage -t
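For illustration, the update command above with this parameter added would look roughly like the following (a sketch reusing the placeholder names from the question; registry credentials omitted; the parameter accepts true or false):
az webapp config container set \
--name MyWebApp \
--resource-group MyResourceGroup \
--docker-custom-image-name "DOCKER|xx.azurecr.io/imagename:tag" \
--enable-app-service-storage true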
I tried it and verified that it works; the output is now the following:
{
"name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
"slotSetting": false,
"value": "true"
},
If the parameter is not provided, the command will override your current app settings.
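To double-check the value after an image update, you can list the app settings and filter for this one (same placeholder names as above):
az webapp config appsettings list \
--resource-group MyResourceGroup \
--name MyWebApp \
--query "[?name=='WEBSITES_ENABLE_APP_SERVICE_STORAGE']"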
https://learn.microsoft.com/en-us/cli/azure/webapp/config/container?view=azure-cli-latest
Related
I have a web app running on Node 16 and I am updating it to Node 18. I used the command below to update it.
az webapp config set -g RG_NAME -n APP_NAME --linux-fx-version "NODE|18LTS"
Once done, when I run az webapp config show, I can see it is on Node 18, but the UI (Azure portal) doesn't show the latest changes, because the stack chosen is Empty. When I manually set the stack to Node, it shows properly. Is it just a UI issue, or do I have to set any parameters to actually update the stack?
I have created NodeJS App and deployed to Azure Linux App Service with NODE 16LTS.
Immediately after deploying the app, in Azure Portal I can see the stack as NODE 16.
As you mentioned, I updated the runtime stack to Node 18 LTS with the given command.
az webapp config set -g YourRGName -n YourAppName --linux-fx-version "NODE|18LTS"
I can also see the runtime stack setting becoming Empty.
Is it just a UI issue, or do I have to set any parameters to actually update the stack?
AFAIK, we don't have any option to set the parameters while updating the stack.
I manually updated the runtime stack and then tried to update the version a second time with the CLI command.
Again, I see the runtime stack option as Empty. This is the default behavior of the Azure App Service UI in the portal.
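If you want to confirm that the new stack is actually applied even though the portal shows Empty, you can query the setting directly, for example (same placeholder names as above):
az webapp config show -g YourRGName -n YourAppName --query linuxFxVersion -o tsv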
I've learned how to deploy .sh scripts to Azure with Azure CLI. But it seems like I have no clear understanding of how they work.
I'm creating a script that simply extracts a .tgz archive in the current directory of the Azure Web App and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
And then I deploy the script like this:
az webapp deploy --resource-group Group \
--name Name \
--src-path ./startup.sh \
--target-path /home/site/wwwroot/startup.sh \
--type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it just gets executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not unarchived at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter while you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described when talking about the az webapp deploy command itself (it may be just an error in the documentation), but it may work:
az webapp deploy --resource-group Group \
--name Name \
--src-path ./startup.sh \
--path /home/site/wwwroot/startup.sh \
--type=startup
Note that the path you are providing is the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group \
--name Name \
--src-path ./startup.sh \
--type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs could be helpful as well.
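For example, a traced version of the startup script could look like this (a sketch; it assumes archive.tgz is present in the directory the script starts in):
#!/bin/sh
# print where the script is running so the trace shows up in the log stream
echo "startup.sh: running in $(pwd)"
# extract the archive, then remove it only if extraction succeeded
tar zxvf archive.tgz && rm -f ./archive.tgz
echo "startup.sh: finished with exit code $?"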
I'm trying to install an nginx ingress controller into an Azure Kubernetes Service cluster using helm, following this Microsoft guide. It fails when I use helm to install the ingress controller, because helm needs to pull a "kube-webhook-certgen" image from a local Azure Container Registry (which I created and linked to the cluster). The kubernetes pod that is initially scheduled in the cluster fails to pull the image and shows the following error when I use kubectl describe pod [pod_name]:
failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
This section describes using helm to create an ingress controller.
The guide describes creating an Azure Container Registry and linking it to a kubernetes cluster, which I've done successfully using:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
I then imported the required 3rd-party images successfully into my 'local' Azure Container Registry as detailed in the guide (an example import command is shown after the permission check below). I checked that the cluster has access to the Azure Container Registry using:
az aks check-acr --name MyAKSCluster --resource-group myResourceGroup --acr letsencryptdemoacr.azurecr.io
I also used the Azure Portal to check permissions on the Azure Container Registry and the specific repository that has the issue (it shows that both the cluster and repository have the AcrPull permission).
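For reference, importing the webhook patch image into the registry can be done along these lines (a sketch assuming the image is sourced from Docker Hub, as the guide did at the time; substitute your own registry name):
az acr import \
--name letsencryptdemoacr \
--source docker.io/jettech/kube-webhook-certgen:v1.5.1 \
--image jettech/kube-webhook-certgen:v1.5.1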
When I run the helm script to create the ingress controller, it fails at the point where it's trying to create a kubernetes pod named nginx-ingress-ingress-nginx-admission-create in the ingress-basic namespace that I created. When I use kubectl describe pod [pod_name_here], it shows the following error, which prevents creation of the ingress controller from continuing:
Failed to pull image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": [rpc error: code = NotFound desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
This is the helm script that I run in a linux terminal:
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=1 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \
--set controller.image.image=$CONTROLLER_IMAGE \
--set controller.image.tag=$CONTROLLER_TAG \
--set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
--set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \
--set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
--set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
--set controller.service.loadBalancerIP=$STATIC_IP \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
I'm using the following relevant environment variables:
ACR_URL=letsencryptdemoacr.azurecr.io
PATCH_IMAGE=jettech/kube-webhook-certgen
PATCH_TAG=v1.5.1
How do I fix the authorization?
It seems like the issue is caused by the new ingress-nginx/ingress-nginx helm chart release. I have fixed it by using version 3.36.0 instead of the latest (4.0.1).
helm upgrade -i nginx-ingress ingress-nginx/ingress-nginx \
--version 3.36.0 \
...
Azure support identified and provided a solution to this, and essentially confirmed that the documentation in the Microsoft tutorial is now outdated relative to the current Helm release of the ingress controller.
The full error message I was getting was similar to the following, which indicates that the first error encountered is actually that the image is NotFound; the message about Unauthorized is misleading.
The issue appears to be that the install references 'digests' for a couple of the images required by the install (a digest is essentially a unique identifier for an image). The install appears to have been using the digests of the docker images from their original location, not the digests of the copies I imported into the Azure Container Registry. This obviously doesn't work, as the digests of the images the install tries to pull don't match the digests of the images imported into my Azure Container Registry.
Failed to pull image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': [rpc error: code = NotFound desc = failed to pull and unpack image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to resolve reference 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to resolve reference 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen#sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
The generated digest for the images that I'd imported into my local Azure Container Registry needed to be specified as additional arguments to the helm install:
--set controller.image.digest="sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" \
--set controller.admissionWebhooks.patch.image.digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" \
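One way to obtain the digest of an imported image is to query the registry directly, for example (a sketch assuming the registry and image names from this question; the exact output shape may vary by CLI version):
az acr repository show \
--name letsencryptdemoacr \
--image jettech/kube-webhook-certgen:v1.5.1 \
--query digest \
--output tsv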
I then had a second issue where the ingress controller pod was stuck in CrashLoopBackOff. I fixed this by re-importing a different version of the ingress controller image than the one referenced in the tutorial, as follows:
Set the environment variable used to identify the tag to pull for the ingress controller image:
CONTROLLER_TAG=v1.0.0
Delete the ingress repository from the Azure Container Registry (I did this manually via the portal), then re-import it using the following (the values of the other variables should be as specified in the Microsoft tutorial):
az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
Make sure you set all the digests to empty:
--set controller.image.digest=""
--set controller.admissionWebhooks.patch.image.digest=""
--set defaultBackend.image.digest=""
Basically, this will pull the image <your-registry>.azurecr.io/ingress-nginx/controller:<version> without the @<digest> suffix.
The other problem: if you use the latest chart version, the deployment will crash into CrashLoopBackOff status. Checking the live log of the pod, you will see a problem with flags, e.g. Unknown flag --controller-class. To resolve this, you can specify the --version flag in the helm install command to use version 3.36.0. All deployment problems should then be resolved.
Faced the same issue on AWS; using an older version of the helm chart helped.
I used version 3.36.0 and it worked fine.
I run the same code snippet as for other extensions:
az vm extension set \
--resource-group "azure-vm-arm-rg" \
--vm-name "azure-vm" \
--name "WindowsAgent.AzureSecurityCenter" \
--publisher "Qualys"
...and I'm getting:
The handler for VM extension type 'Qualys.WindowsAgent.AzureSecurityCenter'
has reported terminal failure for VM extension 'WindowsAgent.AzureSecurityCenter'
with error message: 'Enable failed for plugin (name: Qualys.WindowsAgent.AzureSecurityCenter,
version 1.0.0.10) with exception Command
C:\Packages\Plugins\Qualys.WindowsAgent.AzureSecurityCenter\1.0.0.10\enableCommandHndlr.cmd
of Qualys.WindowsAgent.AzureSecurityCenter has exited with Exit code: 4306'.
I have no issues installing this extension via the Azure UI in Security Center.
I suspect licensing to be the root cause, but I don't have any dedicated licenses; I believe Security Center manages them automatically.
Any ideas how to install the Qualys extension automatically?
I encountered the same issue. It was because the extension was added too soon after the VM had started. The prerequisite is that the Azure Virtual Machine Agent should be running on the VM before the extension is added.
For my solution, I added dependencies on other extensions before running this extension, as shown in the ARM template snippet below. That gave the machine enough time to start and have the Azure Virtual Machine Agent running before the Qualys extension is added.
{
  "type": "microsoft.compute/virtualmachines/providers/serverVulnerabilityAssessments",
  "apiVersion": "2015-06-01-preview",
  "name": "[concat(parameters('virtualMachineName'), '/Microsoft.Security/Default')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'), '/extensions/AzurePolicyforWindows')]",
    "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'), '/extensions/Microsoft.Insights.VMDiagnosticsSettings')]",
    "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'), '/extensions/AzureNetworkWatcherExtension')]"
  ]
}
Make sure you have no Azure Policies configured which do things like require tags, as this can block the extension installation and only give the error message The resource operation completed with terminal provisioning state 'Failed'.
I tried to add a custom script to a VM through extensions. I observed that when the VM is created, an extension of type Microsoft.Azure.Extensions.CustomScript is created with the name "cse-agent" by default. So I tried to update the extension by encoding the file into the script property:
az vm extension set \
--resource-group test_RG \
--vm-name aks-agentpool \
--name CustomScript \
--subscription ${SUBSCRIPTION_ID} \
--publisher Microsoft.Azure.Extensions \
--settings '{"script": "'"$value"'"}'
$value represents the script file encoded in base64.
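For reference, one common way to produce such a value (a hypothetical helper; the script file name is an assumption):
# encode the script as a single-line base64 string (GNU coreutils; -w0 disables line wrapping)
value=$(base64 -w0 ./myscript.sh)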
Doing that gives me an error:
Deployment failed. Correlation ID: xxxx-xxxx-xxx-xxxxx.
VM has reported a failure when processing extension 'cse-agent'.
Error message: "Enable failed: failed to get configuration: invalid configuration:
'commandToExecute' and 'script' were both specified, but only one is validate at a time"
From the documentation, it is mentioned that when the script attribute is present, there is no need for commandToExecute. As you can see above, I haven't specified commandToExecute; it's somehow taking it from the previous extension. Is there a way to update it without deleting it? Also, it would be interesting to know what impact deleting the cse-agent extension would have.
FYI: I tried deleting the 'cse-agent' extension from the VM and adding my extension. It worked.
The cse-agent VM extension is crucial: it manages all of the post-install steps needed to configure the nodes so they are considered valid Kubernetes nodes. Removing this CSE will break the VMs and render your cluster inoperable.
If you are interested in applying changes to nodes in an existing cluster, while not officially supported, you could leverage the following project:
https://github.com/juan-lee/knode
This allows you to configure the nodes using a DaemonSet, which helps when your node pools have the auto-scaling feature enabled.
For simple node alterations of the filesystem, a privileged pod with a host path will also work:
https://dev.to/dannypsnl/privileged-pod-debug-kubernetes-node-5129