Azure App Service - docker-compose file issues

I have some issues with deployment.
When I run the command below to push the configuration, it pushes corrupted data:
az webapp config container set --resource-group ${AZURE_RG_NAME} --name ${AZURE_APP_NAME} --multicontainer-config-type compose --multicontainer-config-file deploy/docker-compose.yml
As far as I can see, the encoded data that was sent (as base64) cannot be decoded properly:
{
  "name": "DOCKER_CUSTOM_IMAGE_NAME",
  "value": "COMPOSE|dmvyc2lvbjogjzmncnnlcnzpy2..." // here
}
When I base64-encode the file myself and dump it, it decodes correctly. I have checked the encoding of both files and both are UTF-8.
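As a quick local sanity check (a sketch, assuming a POSIX shell with the coreutils base64 tool; the path is the one from the command above):
# Round-trip the compose file; the output should match the original file exactly.
base64 deploy/docker-compose.yml | base64 --decode
Note also that the value stored by the CLI above is entirely lowercase; base64 is case-sensitive, so a lowercased string no longer decodes to the original bytes.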
This is how it looks on the Azure configuration page (screenshot omitted).

I have talked with Azure support about this; they have confirmed the bug, and a release is on its way. Here is the bug report, which links to the fix: https://github.com/Azure/azure-cli/issues/14208
No known ETA for the deploy, but the support engineer guessed around the beginning of August.
In the meantime, we have worked around this bug by pasting the contents of the Docker Compose file under "Container Settings" in the Azure portal.
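To see exactly what the CLI stored, the container settings can be read back (a sketch; az webapp config container show is the read counterpart of the set command from the question):
# Print the multicontainer settings currently stored for the app,
# including the COMPOSE|<base64> value shown above.
az webapp config container show --resource-group ${AZURE_RG_NAME} --name ${AZURE_APP_NAME}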

How to inspect image properties for an image hosted in ACR

I have a few images hosted in ACR, and I want to inspect an image (a repository image) deployed there.
For example, I have one "hello-world" image in the "test123" ACR. I want to inspect the ACR image and read the JSON content of the image. I didn't see any suitable .NET packages or .NET SDK libraries.
How can I run "docker image inspect test123.azurecr.io/hello-world:v3" using .NET SDK libraries by connecting to Azure Container Registry (ACR)?
I have tried the following packages, but I didn't see any support for an equivalent command in these .NET libraries:
https://www.nuget.org/packages/Microsoft.Azure.Management.ContainerService/
https://www.nuget.org/packages/Docker.DotNet/
https://github.com/dotnet/dotnet-docker
You can use the Azure CLI (az). It has a manifest command:
az acr manifest show --registry my-acr --name hello-world:123
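If you only need a single field, the manifest JSON can be filtered in place (a sketch; the config.digest path assumes a Docker v2 schema 2 manifest):
# Extract just the digest of the image config blob from the manifest.
az acr manifest show --registry my-acr --name hello-world:123 --query config.digest --output tsv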
Finally I got the answer to my question. We can use the registry's HTTP V2 API to retrieve the manifest or config information.
To get manifest information, the URLs are as follows:
GET {url}/v2/{name}/manifests/{reference}
https://test.azurecr.io/v2/{imageName}/manifests/sha256:25635dfger4656454fggf
GET {url}/v2/{name}/blobs/{digest}
https://test.azurecr.io/v2/{imageName}/blobs/sha256:4534afdf33289988956565
Note: There are different digests available for an image. You will see one digest for the entire manifest and a different digest for the config section of the manifest.
Please find the documentation below:
https://learn.microsoft.com/en-us/rest/api/containerregistry/manifests/get
https://learn.microsoft.com/en-us/rest/api/containerregistry/blob/get
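A minimal curl sketch of the two calls above (bash; assumes the test123 registry and hello-world:v3 image from the question, and a token from az acr login --expose-token; the config digest is a placeholder to be copied out of the manifest JSON):
# Get a data-plane access token for the registry.
TOKEN=$(az acr login --name test123 --expose-token --output tsv --query accessToken)
# GET {url}/v2/{name}/manifests/{reference}
curl -s -H "Authorization: Bearer $TOKEN" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://test123.azurecr.io/v2/hello-world/manifests/v3
# GET {url}/v2/{name}/blobs/{digest} - fetches the image config blob.
curl -s -L -H "Authorization: Bearer $TOKEN" https://test123.azurecr.io/v2/hello-world/blobs/<config-digest>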

Failed to pull image - Azure AKS

I have been following this guide to deploy an application on Azure using AKS.
Everything was fine until I deployed; one pod is stuck in a not-ready state with ImagePullBackOff status.
kubectl describe pod output
Running the command below succeeds, so I am sure authentication is happening:
az acr login --name siddacr
and this command lists out the image which was uploaded
az acr repository list --name <acrName> --output table
I figured it out.
The error was a typo in the name of the image in the deployment.yml file.
ImagePullBackOff might be caused by any of the following reasons (see the checks after this list):
The image or tag doesn’t exist
You’ve made a typo in the image name or tag
The image registry requires authentication
You’ve exceeded a rate or download limit on the registry
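Two quick CLI checks that cover the first three causes (a sketch; the registry name siddacr is from the question, while the repository, resource group, and cluster names are placeholders):
# 1. List the tags the registry actually holds and compare them against the
#    image reference (registry/name:tag) used in deployment.yml.
az acr repository show-tags --name siddacr --repository <imageName> --output table
# 2. Ensure the AKS cluster is authorized to pull from the registry.
az aks update --resource-group <resource-group> --name <cluster-name> --attach-acr siddacr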

How to deploy pgadmin4 docker image on azure web app?

I am unable to run the Docker image dpage/pgadmin4, which is available on Docker Hub, on an Azure web app (Linux).
I have installed Docker on my Linux machine and was able to run that Docker image locally. Then I created a web app in Azure with the options given below:
OS: Linux
Publish: Docker Image
App service plan: Linux app service
After creating the web app, I added two environment variables in the App Settings section:
PGADMIN_DEFAULT_EMAIL : user@domain.com
PGADMIN_DEFAULT_PASSWORD : SuperSecret
Finally, the login screen is visible, but when I enter the above credentials, it doesn't work and keeps redirecting back to the login page.
Update: If login is working properly, the pgadmin initial screen appears (screenshot omitted).
After several retries I once got a message (CSRF token invalid) displayed in the top-right corner of the login screen.
For CSRF to work properly there must be some server-side state, so I activated "ARR affinity" under "General Settings" on the Azure "Configuration" page.
I also noticed that the examples in the documentation set two environment variables: PGADMIN_CONFIG_CONSOLE_LOG_LEVEL (set to '10' in the example) and PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION (set to 'True' in the example).
After enabling ARR affinity and setting PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION to False, the login started to work. I have no idea what PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION actually does, so please take that with caution.
If that's not working for you, setting PGADMIN_CONFIG_CONSOLE_LOG_LEVEL to 10 and enabling console debug logging may give you a clue about what's happening.
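The same workaround expressed as CLI commands (a sketch; the resource group and app names are placeholders, and the flags mirror the portal settings described above):
# Enable ARR affinity (the "ARR affinity" toggle under General Settings).
az webapp update --resource-group <resource-group> --name <app-name> --client-affinity-enabled true
# Relax pgadmin's cookie protection, as described above.
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=False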
For your issue, I ran a test and found something really strange. When I deploy the Docker image dpage/pgadmin4 to the Azure Web App for Containers service through the Azure CLI and set the app settings, there is no problem logging in with the user and password. But when I deploy it through the Azure portal, I hit the same issue as you.
I am not sure of the reason, but the solution is to set the environment variables PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD through the Azure CLI like below:
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PGADMIN_DEFAULT_EMAIL="user@domain.com" PGADMIN_DEFAULT_PASSWORD="SuperSecret"
If you really want to know the reason, you can send feedback to Microsoft. Maybe it's a bug or some special setting.

Download files to an Azure VM using fileUris

"fileUris": [
"https://files.blob.core.windows.net/extensions/test.sh"]
In an Azure scale set, does this part of the extension download the file test.sh to the VM, or is it called directly from blob storage?
I'm assuming you are talking about the custom script extension for Azure virtual machines.
On its documentation page it reads:
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software installation, or any other configuration / management task. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run time. The Custom Script extension integrates with Azure Resource Manager templates, and can also be run using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API.
The relevant part is that the extension "downloads and executes" scripts.
The extension works by first downloading and then executing the scripts you provide to it.
Edit: If you need to deploy some external resources, you can upload them to your GitHub account or an Azure Storage blob and download/read them from there.
See, for example, this answer for more details on how to download a file from a blob (note that -OutFile needs a full file path, not just a directory):
Invoke-WebRequest -Uri https://jasondisk2.blob.core.windows.net/msi/01.PNG -OutFile 'C:\01.PNG'
If you simply want to read a JSON response, you can do as described in this other answer:
$response = Invoke-RestMethod -Uri "https://yadayada:8080/bla"
$response.flag
Note: Invoke-RestMethod automatically converts the JSON response to a PSObject.
As for the working directory: the extension downloads its files into the following directory
C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>
where <n> is a decimal integer which may change between executions of the extension. The 1.* value matches the actual, current typeHandlerVersion value of the extension.
For example, the actual directory could be
C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2
See the troubleshooting section in the Azure documentation for more information.
Alternatively, for a Linux-based system the path is similar to
/var/lib/waagent/custom-script/download/0/
See this page for more information.
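For completeness, a sketch of attaching such an extension to a scale set with the CLI (the resource names are placeholders, the fileUris value is from the question, and the commandToExecute is an assumption):
# Attach the Linux custom script extension; it downloads the script first,
# then runs the command against the downloaded copy.
az vmss extension set --resource-group <resource-group> --vmss-name <scale-set-name> --name CustomScript --publisher Microsoft.Azure.Extensions --version 2.0 --settings '{"fileUris":["https://files.blob.core.windows.net/extensions/test.sh"],"commandToExecute":"sh test.sh"}'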

Clear an App Service instance and upload new content from a zip file

On App Service, what's the best way of deploying new content from a zip file, such that it replaces any existing content?
Please note:
I am running on linux
I cannot use msdeploy
I cannot use git
I cannot use VSTS
It needs to be simple
It can't be prone to timing out
It has to be supported by all subscription levels of App Service
Commands should only return after their respective operation(s) have completed
I have access to ARM templates
Provided it isn't too difficult, I'm sure I could upload files to storage blobs
For more information, see this discussion here: https://github.com/projectkudu/kudu/issues/2367
There is a solution that consists of calling the ARM MSDeploy provider to deploy a cloud-hosted zip package. This requires no msdeploy on your client, so the fact that msdeploy technology is involved is mostly an implementation detail you can ignore.
There are a couple of gotchas that I will call out at the end.
The steps are:
First, get your zip hosted in the cloud. For example, I have a test one here that you can play around with: https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip (note that this zip uses special msdeploy packaging, but you can also use a plain old zip with just your files).
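If you need to host your own zip, uploading it to a blob container works (a sketch; the account and container names are placeholders, and per the first gotcha below the blob should be publicly readable, since SAS URLs contain '=' characters):
# Upload the zip so the ARM deployment can fetch it by URL.
az storage blob upload --account-name <storage-account> --container-name <container> --name site.zip --file ./site.zip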
Then run the following command using CLI 2.0, replacing your resource group, app name, and zip URL:
az resource update --resource-group MyRG --namespace Microsoft.Web --parent sites/MySite --resource-type Extensions --name MSDeploy --set properties.packageUri=https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip --api-version 2015-08-01
This will result in the package being deployed to your wwwroot, with any existing content that's not in the zip getting deleted. It's efficient, as it won't touch any files that already exist and are identical to what's in the zip, so it's far faster than cleaning everything out and unzipping from scratch (but the results are identical).
Now, a couple of gotchas:
Due to what seems like a bug in CLI 2.0, I wasn't able to pass a URL that contains an equal sign, which rules out SAS URLs. I'll report that to them. For now, test the process with a public zip, like my test package above.
The command line is more complex than it should be. I will also ask the CLI team about this.
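Since one requirement above is that commands only return after the operation completes, the same ARM resource can be read back to confirm the state (a sketch mirroring the update command; properties.provisioningState should report Succeeded once done):
# Inspect the MSDeploy extension resource created by the deployment.
az resource show --resource-group MyRG --namespace Microsoft.Web --parent sites/MySite --resource-type Extensions --name MSDeploy --api-version 2015-08-01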
