Azure Container Registry throws exception when building - azure

I'm trying to follow the MS Learn tutorials about containers in Azure. I'm trying to push a Docker image to my recently created Azure container registry using the az acr build command in the Azure CLI, together with a Dockerfile. After running the command I get the message "Packing source code into tar to upload..." in the console, and then after a couple of minutes I get this:
[WinError 5] Access is denied: '.\\AppData\\Local\\Application Data'
I did a little research, and apparently that's a junction folder in Windows 10, which only exists for backwards compatibility and simply redirects you to the new location Microsoft uses.
Has anyone run into this error?
I also tried modifying the permissions on the Application Data folder, but no matter which permissions it has, it still throws the same exception.
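For reference, this is roughly the command I'm running (the registry name, image tag, and paths are placeholders for my actual values), executed from the directory that contains the Dockerfile:
az acr build --registry myregistry --image myapp:v1 --file Dockerfile .
My guess is that the "Packing source code into tar" step walks everything under the current directory, so if it is run from somewhere under the user profile it could hit the AppData junction, but I'm not sure.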
UPDATE
The tutorial I'm trying to follow along is in this link.
I also tried the --verbose flag of the az acr build command, and the error is being thrown in `cli.azure.cli.core.util`. I looked over the Azure CLI GitHub project and found the file, but I'm not a good enough developer to figure out what's going on.

Related

az managedapp definition create: DownloadItemFromBlobFailed due to a failed connection

I want to create an Azure "Managed App" definition, in preparation for making an Azure Marketplace offering. I am following these MS instructions, and I had specifically been using this MS example managed app. There were errors in the documentation, which I posted to the MS team (along with my proposed fixes). Nevertheless, I did get the MS example working!
My next step was to replace the original MS sample deployment bundle...
https://raw.githubusercontent.com/Azure/azure-managedapp-samples/master/Managed%20Application%20Sample%20Packages/201-managed-storage-account/managedstorage.zip
...with my own deployment bundle...
https://github.com/brentarias/azureStaticEmpty/raw/master/baselinepocapp.zip
This didn't work. When issuing the az managedapp definition create command, I received the following error:
(DownloadItemFromBlobFailed) Download of the item from blob at 'https://github.com/brentarias/azureStaticEmpty/raw/master/baselinepocapp.zip' failed due to a failed connection.
Code: DownloadItemFromBlobFailed
Message: Download of the item from blob at 'https://github.com/brentarias/azureStaticEmpty/raw/master/baselinepocapp.zip' failed due to a failed connection.
It makes no sense to have a "connection" error, so I assumed that the REAL error was something inside of my deployment bundle. To test that theory, I copied the original MS sample bundle to a variety of places that I control, including Azure BLOB storage. One example location I placed the copied MS deployment file was here:
https://github.com/brentarias/azureStaticEmpty/raw/master/managedstorage.zip
When using this latter URL, I still received the same "connection" error.
In short, the only way for me to bypass the "connection" error is if I use the original sample MS deployment, from the original path that MS supplied. Incidentally, I also tried a variant URL of the original MS sample:
https://github.com/Azure/azure-managedapp-samples/raw/master/Managed%20Application%20Sample%20Packages/201-managed-storage-account/managedstorage.zip
Suddenly the deployment works! However, that location is still the original MS-owned repo "azure-managedapp-samples". This simply seems to confirm that if the deployment does not come from a MS-owned repo, I am then unable to make the deployment.
What am I doing wrong?
Update 2/3/2023
I finally found a way to make this work! When using an Azure storage account, simply having a publicly visible URL for the deployment bundle is insufficient. I need a "shared access signature" (SAS) URL for that deployment bundle, and then the az managedapp definition create command works!
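Roughly what ended up working for me looks like this (the storage account, container, definition names, and the authorization IDs are placeholders for my real values):
az storage blob generate-sas --account-name mystorageacct --account-key $STORAGE_KEY --container-name packages --name baselinepocapp.zip --permissions r --expiry 2023-12-31T00:00Z --output tsv
az managedapp definition create --name MyDefinition --resource-group MyRG --location eastus --lock-level ReadOnly --display-name MyDefinition --description "POC app" --authorizations "<principalId>:<roleDefinitionId>" --package-file-uri "https://mystorageacct.blob.core.windows.net/packages/baselinepocapp.zip?<sas-token>"
The SAS token from the first command gets appended to the blob URL that is passed as --package-file-uri.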
However, my overall question still is unanswered:
What are the valid file-share platforms that az managedapp definition create supports? Besides GitHub and Azure Blob Storage, what else?
What exact configuration do I need to make with a GitHub raw link before it is considered "kosher" by az managedapp definition create?

Set up deployment to app service using personal access token

I've been given a personal access token (full access) which allows me to connect to a private Azure Git repo within an Azure DevOps account from another subscription. Connecting to that repo locally using Git works fine.
I would like to set this up as a CI/CD deployment source for my app service but have been unable to find out how to do this. I tried Azure CLI:
az webapp deployment source config ... --repo-url https://anything:{pat}@dev.azure.com/Company/Project/_git/Reponame
This fails with a 500 error.
So I tried calling the REST API directly, but that also fails with a 500 error, so it's not an Azure CLI issue.
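For completeness, the full shape of the CLI call I'm attempting looks roughly like this (the resource group, app name, branch, and the PAT itself are placeholders):
az webapp deployment source config --resource-group MyRG --name my-webapp --branch master --manual-integration --repo-url "https://anything:{pat}@dev.azure.com/Company/Project/_git/Reponame"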
Hoping someone can point me in the right direction. Thanks for the help, much appreciated.

Azure pipeline 'WinRMCustomScriptExtension' underlying connection was closed in non-public VM

In an Azure pipeline, when creating a VM through a deployment template, we have the option to 'Configure with WinRM agent'.
This acts as a custom extension behind the scenes, but the download of this custom extension can be blocked by an internal VNet in Azure. This is the error we are getting:
<datetime> Adding extension 'WinRMCustomScriptExtension' on virtual machine <vmname>
<datetime> Failed to add the extension to the vm: <vmname>. Error: "VM has reported a failure when processing extension 'WinRMCustomScriptExtension'. Error message: \"Failed to download all specified files. Exiting. Error Message: The underlying connection was closed: An unexpected error occurred on a send.\"\r\n\r\nMore information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot "
Since the files cannot be downloaded, I am thinking of a couple of solutions:
1. How can I know which PowerShell files Azure is using to set up WinRM?
2. The location to store the files would be a storage account (in the same VNet as the VM).
3. Perhaps not use WinRM at all and use a custom script extension to handle everything (with all files coming from the storage account). I hope an error from the extension stops the pipeline if it happens.
Is there a better solution to resolve this? To me it looks like a design gap on Azure's part, as it does not cover non-public VMs.
EDIT:
Found the answer to #1: https://aka.ms/vstsconfigurewinrm. This was shown in the raw logs of the pipeline when diagnostics were enabled.
Even if you know, how does it help you? The VM won't be able to download them anyway, and you can't really tell the extension to use local files.
If you enable service endpoints and allow your subnet to talk to the storage account, it should work (see the sketch below).
There is a way to configure WinRM when you create the VM (Key Vault example).
You could use the script extension like you wanted to as well, but the script extension has to download its files to the VM as well (example).
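A rough sketch of the service-endpoint approach (the resource group, VNet, subnet, and storage account names are placeholders):
az network vnet subnet update --resource-group MyRG --vnet-name MyVnet --name MySubnet --service-endpoints Microsoft.Storage
az storage account network-rule add --resource-group MyRG --account-name mystorageacct --vnet-name MyVnet --subnet MySubnet
After that, traffic from the subnet can reach the storage account over the service endpoint.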

Application fails to start when using Azure website package deployment

We've recently switched to using Azure package deployment for our sites (https://github.com/Azure/app-service-announcements/issues/84), and it's a great feature which has radically simplified our deployments. However, we have a second site which will not run when packaged (but does run when not packaged).
We followed the standard procedure for setting a site to run from a package:
created the folder /data/SitePackages via FTP,
dropped the package in there along with the packagename.txt file,
set the app setting WEBSITE_RUN_FROM_PACKAGE=1.
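(For reference, the app setting step is just the equivalent of this CLI call; the resource group and app name are placeholders for ours:)
az webapp config appsettings set --resource-group MyRG --name my-webapp --settings WEBSITE_RUN_FROM_PACKAGE=1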
However, we receive "You do not have permission to view this directory or page." on the homepage, and on any other page we receive "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." It's as though the site isn't loading the package at all. The Azure log stream shows "HTTP Error 401.3 - Unauthorized" on the home page and a standard 404 for anything else.
From the Azure portal, if I open the console and run ls to see a directory listing of the files it thinks it's running, all I see is a single file:
FAILED TO DOWNLOAD ZIP FILE.txt
Turns out this was App Service failing to download the package. Looking at our TeamCity setup, the step which generates the packagename.txt file had a typo, so the zip file Azure was trying to load did not exist.

Clear an App Service instance and upload new content from a zip file

On App Service, what's the best way of deploying new content from a zip file, such that it replaces any existing content?
Please note:
I am running on linux
I cannot use msdeploy
I cannot use git
I cannot use VSTS
It needs to be simple
It can't be prone to timing out
It has to be supported by all subscription levels of App Service
Commands should only return after their respective operation(s) have completed
I have access to ARM templates
Provided it isn't too difficult, I'm sure I could upload files to storage blobs
For more information, see this discussion here: https://github.com/projectkudu/kudu/issues/2367
There is a solution that consists of calling the ARM msdeploy provider to deploy a cloud-hosted zip package. This requires no msdeploy on your client, so the fact that msdeploy technology is involved is mostly an implementation detail you can ignore.
There are a couple gotchas that I will call out at the end.
The steps are:
First, get your zip hosted in the cloud. For example, I have a test one here you can play around with: https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip (note that this zip uses special msdeploy packaging, but you can also use a plain old zip with just your files).
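If you want to host your own zip the same way, something along these lines should work (the storage account and container names are placeholders, you'll need your usual storage credentials, and the container has to allow public blob read access for this flow):
az storage blob upload --account-name mystorageacct --container-name arm --name MyPackage.zip --file ./MyPackage.zip
az storage blob url --account-name mystorageacct --container-name arm --name MyPackage.zip --output tsv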
Then run the following command using CLI 2.0, replacing your resource group, app name, and zip URL:
az resource update --resource-group MyRG --namespace Microsoft.Web --parent sites/MySite --resource-type Extensions --name MSDeploy --set properties.packageUri=https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip --api-version 2015-08-01
This will result in the package getting deployed to your wwwroot, with any existing content that's not in the zip getting deleted. It's efficient, as it won't touch any files that already exist and are identical to what's in the zip, so it's far faster than cleaning out everything and unzipping from scratch (but the results are identical).
Now a couple gotchas:
Due to what seems like a bug in CLI 2.0, I wasn't able to pass a URL that contains an equal sign, which rules out SAS URLs. I'll report that to them. For now, test the process with a public zip, like my test package above.
The command line is more complex than it should be. I will also ask the CLI team about this.
