Using Azure Container Registry to create a new Azure Container Instance from C#

I have created an Azure Container Registry and pushed a custom image to it - no problem - everything works as intended. I have tried creating a container instance from the image using the Azure portal, and no problems there either. However, when I try to automate things using C# with the Microsoft Azure Management Container Instance Fluent API, I run into problems, and even though I feel like I have been all over the Internet and the settings looking for hidden obstructions, I haven't been able to find much help.
My code is as follows:
var azureCredentials = new AzureCredentials(
    new ServicePrincipalLoginInformation
    {
        ClientId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        ClientSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    },
    "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    AzureEnvironment.AzureGlobalCloud);

var azure = Azure
    .Configure()
    .Authenticate(azureCredentials)
    .WithSubscription("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");

IContainerGroup containerGroup = azure.ContainerGroups.Define("mytestgroup")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("mytest-rg")
    .WithLinux()
    .WithPrivateImageRegistry("mytestreg.azurecr.io", "mytestreg", "xxxxxxxxxxxxxx")
    .WithoutVolume()
    .DefineContainerInstance("mytestgroup")
        .WithImage("mytestimage/latest")
        .WithExternalTcpPort(5555)
        .WithCpuCoreCount(.5)
        .WithMemorySizeInGB(.5)
        .Attach()
    .Create();
The above code keeps giving me the exception:
Microsoft.Rest.Azure.CloudException: 'The image 'mytestimage/latest' in container group 'mytestgroup' is not accessible. Please check the image and registry credential.'
I have tried a couple of things:
Testing the credentials with docker login - no problem.
Pulling the image with docker pull mytestreg.azurecr.io/mytestimage - no problem.
Swapping WithPrivateImageRegistry with WithPublicImageRegistryOnly and just using debian in WithImage - works as intended - no problem.
Leaving the latest tag out of the image name - still doesn't work.
I have no idea why the credentials for the private registry won't work - I have been copy/pasting directly from the Azure portal to avoid typos, tried typing them in manually, etc.
Using Fiddler to inspect the traffic doesn't reveal anything interesting, other than that the above exception message is returned directly from the Azure Management API.
What is the obvious thing that I am missing?

The other answer (i.e. using the full Azure registry server name):
.WithImage("mytestreg.azurecr.io/mytestimage:latest")
seems to be part of the solution, but even with that change I was still seeing this error. Looking through other examples on the web (https://github.com/Azure-Samples/aci-dotnet-create-container-groups-using-private-registry/blob/master/Program.cs) containing what I needed, I changed my Azure authentication from:
azure = Azure.Authenticate(authFilePath).WithDefaultSubscription();
to:
AzureCredentials credentials = SdkContext.AzureCredentialsFactory.FromFile(authFilePath);
azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();
and with THAT change, things are now working correctly.

I had the same problem for the last couple of weeks and I've finally found a solution. You should add your Azure registry server name in front of the image name. So, following your example, change:
.WithImage("mytestimage/latest")
To:
.WithImage("mytestreg.azurecr.io/mytestimage:latest")
At least that did it for me, I hope it helps someone else.
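Putting the two answers together, a minimal sketch of a working container group definition might look like the following (registry, resource group and image names are the placeholders from the question; the credentials come from an auth file as in the answer above):
AzureCredentials credentials = SdkContext.AzureCredentialsFactory.FromFile(authFilePath);

IAzure azure = Azure
    .Configure()
    .Authenticate(credentials)
    .WithDefaultSubscription();

IContainerGroup containerGroup = azure.ContainerGroups.Define("mytestgroup")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("mytest-rg")
    .WithLinux()
    .WithPrivateImageRegistry("mytestreg.azurecr.io", "mytestreg", "xxxxxxxxxxxxxx")
    .WithoutVolume()
    .DefineContainerInstance("mytestgroup")
        // Registry-qualified image name with an explicit tag.
        .WithImage("mytestreg.azurecr.io/mytestimage:latest")
        .WithExternalTcpPort(5555)
        .WithCpuCoreCount(.5)
        .WithMemorySizeInGB(.5)
        .Attach()
    .Create();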

Related

Unable to deploy Azure Function App - error with storage account

Lately I've had trouble deploying a Function App via the Azure CLI. Last week on Tuesday, I was still able to deploy a Function App via the Azure CLI.
This week, like any other day before that, I used the fairly common Azure Functions Core Tools command func azure functionapp publish. The version of the Azure Functions Core Tools I am using is 3.0.3233.
Now I am getting this error every time:
Retry: 1 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
Retry: 2 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
Retry: 3 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
I checked that the AzureWebJobsStorage setting has a correct value; I even connected to the storage account using that connection string via the Azure Storage Explorer app.
Just in case, I created a new Function App in another region and I still get the same error.
Has anyone else encountered this error? I suspect this is an error in the tool itself, maybe a faulty build?
I suspect that AzureWebJobsStorage is missing or invalid in the App Settings section of the function app in the Azure portal.
Make sure that it is added there and you are not deleting those settings through CLI/templates and recreating them without AzureWebJobsStorage.
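If you want to rule out a malformed value before deploying, here is a minimal sketch (assuming the WindowsAzure.Storage package; the account name and key are placeholders) that simply checks whether the string parses as a storage connection string:
using System;
using Microsoft.WindowsAzure.Storage;

class CheckConnectionString
{
    static void Main()
    {
        // Placeholder: paste the value you use for AzureWebJobsStorage here.
        var connectionString = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net";

        if (CloudStorageAccount.TryParse(connectionString, out CloudStorageAccount account))
        {
            Console.WriteLine($"Parsed OK, blob endpoint: {account.BlobEndpoint}");
        }
        else
        {
            Console.WriteLine("The connection string is malformed.");
        }
    }
}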
To answer my own question: it seems that this was a transient error. Without changing any code, today I was able to redeploy my function app. Cheers.
If you don't have "Allow storage account key access" enabled on the storage account, you get this error.
There could be other scenarios as well, but the error message itself does not say much.

Azure Form Recognizer app invalid resource name

I'm trying to deploy an instance of the Form Recognizer app in Azure. For that I'm following the instructions in the documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/deploy-label-tool
I have created the Docker instance and the connection, but the step to create the app is failing.
These are the parameters I'm using:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And this is the error I'm getting:
I had the same error. It turned out to be due to incorrect SAS URI formatting because I generated and copied the SAS token via the Storage Accounts interface. It's much easier to get the correct format for the SAS URI if you generate it through the Storage Explorer (currently in Preview) as opposed to through the Storage Accounts.
If you read the documentation carefully, it gives you a step-by-step guide:
"To retrieve the SAS URL, open the Microsoft Azure Storage Explorer, right-click your container, and select Get shared access signature. Set the expiry time to some time after you'll have used the service. Make sure the Read, Write, Delete, and List permissions are checked, and click Create. Then copy the value in the URL section. It should have the form: https://<storage account>.blob.core.windows.net/<container name>?<SAS value>"
Form Recognizer Documentation
The error messages point to a configuration issue with the AzureBlobStorageTemplate Thing. Most likely the containerName field for the Blob Storage Thing is empty or contains invalid characters
Ensure the containerName is a valid Azure storage container name.
Check https://learn.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata for more information.
A container name must be a valid DNS name
The Connector loads and caches all configuration settings during startup. Any changes that you make to the configuration when troubleshooting are ignored until the Connector is restarted.
When creating the container connection, you must add the container into the SAS URI, such as
https://<storage-account>.blob.core.windows.net/<Enter-My-Container-Here>?<SAS Key>
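If you prefer to build the SAS URI in code instead of through Storage Explorer, here is a rough sketch using the Azure.Storage.Blobs SDK (the account name, key and container name are placeholders) that produces a container SAS with the Read, Write, Delete and List permissions the tool expects:
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class GenerateContainerSas
{
    static void Main()
    {
        // Placeholders: fill in your own storage account name, key and container name.
        var credential = new StorageSharedKeyCredential("mystorageaccount", "account-key");
        var containerClient = new BlobContainerClient(
            new Uri("https://mystorageaccount.blob.core.windows.net/mycontainer"),
            credential);

        // Container-level SAS with Read, Write, Delete and List, valid for 30 days.
        Uri sasUri = containerClient.GenerateSasUri(
            BlobContainerSasPermissions.Read
            | BlobContainerSasPermissions.Write
            | BlobContainerSasPermissions.Delete
            | BlobContainerSasPermissions.List,
            DateTimeOffset.UtcNow.AddDays(30));

        // Paste this value as the SAS URI when creating the connection in the labeling tool.
        Console.WriteLine(sasUri);
    }
}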
You can also directly use the open source labeling tool, please see the section further down in the doc:
The OCR Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see OCR Form Labeling Tool.

Can docker on Azure Linux App Service authenticate with the ACR without us specifying the password in the app settings?

We deploy a Linux App Service to Azure using terraform. The relevant configuration code is:
resource "azurerm_app_service" "webapp" {
app_settings = {
DOCKER_REGISTRY_SERVER_URL = "https://${local.ctx.AcrName}.azurecr.io"
DOCKER_REGISTRY_SERVER_USERNAME = data.azurerm_key_vault_secret.acr_admin_user.value
DOCKER_REGISTRY_SERVER_PASSWORD = data.azurerm_key_vault_secret.acr_admin_password.value
...
}
...
}
The problem is that Terraform does not consider app_settings a secret, and so it outputs the DOCKER_REGISTRY_SERVER_PASSWORD value in the clear in the Azure DevOps output (I obfuscated the actual values).
So, I am wondering - can Docker running on an Azure Linux App Service host authenticate with the respective ACR without us having to pass the password in a way that makes it so obvious to everyone who can inspect the pipeline output?
The following article seems relevant in general - https://docs.docker.com/engine/reference/commandline/login, but it is unclear how we can apply it in my context, if at all.
Also, according to https://feedback.azure.com/forums/169385-web-apps/suggestions/36145444-web-app-for-containers-acr-access-requires-admin#%7Btoggle_previous_statuses%7D Microsoft has started working on something relevant, but it looks like this is still a work in progress (almost 5 months in).
I'm afraid you must set the DOCKER_REGISTRY_* environment variables to pull images from the ACR; it's the only way designed by Azure. But for sensitive info such as the password, it also provides a way to hide it. You can use Key Vault to store the password as a secret, and then have the setting read the password from that secret. Take a look at the document Use Key Vault references for App Service. So you can change the app_setting for the password like this:
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)"
Or
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret;SecretVersion=ec96f02080254f109c51a1f14cdb1931)"
Then it just shows the reference of the Key Vault, not the exact password.
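If you manage some settings outside of Terraform, a rough sketch of the same idea with the Fluent management SDK used earlier in this thread (resource group, web app, vault and secret names are hypothetical) would be:
// 'azure' is an authenticated IAzure client, as in the first question above.
// All names below are hypothetical placeholders.
IWebApp webApp = azure.WebApps.GetByResourceGroup("my-rg", "my-webapp");

webApp.Update()
    .WithAppSetting(
        "DOCKER_REGISTRY_SERVER_PASSWORD",
        "@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)")
    .Apply();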
Unfortunately, Azure Web Apps do not support interacting with ACR using a managed identity; you must pass those environment variables to the App Service.
Terraform does not currently support applying a "sensitive" flag to arbitrary values. You can define outputs as sensitive, but it will not help with values you want to hide during the plan phase.
I would suggest checking out https://github.com/cloudposse/tfmask, using the TFMASK_RESOURCES_REGEX configuration to block the output you want to hide during your pipeline. If you're averse to adding dependencies, a similar effect could be achieved by piping terraform apply through grep --invert-match "DOCKER_REGISTRY" instead.
@charles-xu has a good answer as well if you want to set up mappings between Key Vault and your web app and then push your tokens into Key Vault secrets.
Now it's possible to use managed identity to pull images from ACR.
You may do the following:
Go to your Container Registry page in the Azure portal.
Open the Access control (IAM) tab.
Then open the Role assignments tab.
Add the AcrPull role assignment for your App Service or Function App.
In the Deployment Center of your App Service, choose Managed Identity for the Authentication setting.
Or you may use CLI by following the steps from the official documentation (link below):
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#use-managed-identity-to-pull-image-from-azure-container-registry
After you have added the role assignment, the DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD settings may be removed from the App Service's App Settings.
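For completeness, the role assignment itself can also be scripted; a rough sketch with the Fluent SDK used elsewhere in this thread (the principal ID and registry resource ID are hypothetical placeholders):
// 'azure' is an authenticated IAzure client, as in the first question above.
// Grant the web app's system-assigned identity the AcrPull role on the registry.
string principalId = "<web-app-managed-identity-principal-id>"; // from the web app's Identity blade
string registryId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerRegistry/registries/<registry>";

azure.AccessManagement.RoleAssignments
    .Define(SdkContext.RandomGuid())
    .ForObjectId(principalId)
    .WithBuiltInRole(BuiltInRole.Parse("AcrPull"))
    .WithScope(registryId)
    .Create();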

Azure Container Registry in Azure Web App for Containers across subscriptions

I'm currently trying to set up an Azure Web App for Containers, linking it to an Azure Container Registry that lives inside a different subscription. That's why my initial thought was to use the Private Registry tab inside the web app's Container settings to enter the credentials of said registry.
However, when I save and reload the page, the settings of the Azure Container Registry tab are now populated and the Private Registry tab is empty. The issue is that I now get the following error:
2020-01-21 21:51:12.951 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"pull access denied for cliswebapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
I assume because no password was stored. How do I configure this properly?
When you use a private registry (and Azure Container Registry is a private registry too) and deploy to Web App for Containers, you need to set these environment variables:
DOCKER_REGISTRY_SERVER_USERNAME - The username for the ACR server.
DOCKER_REGISTRY_SERVER_URL - The full URL to the ACR server. (For example, https://my-server.azurecr.io.)
DOCKER_REGISTRY_SERVER_PASSWORD - The password for the ACR server.
See more details in If you're using Azure Container Registry, you need to set some app settings.
And if you create multiple containers, all the images must be in the same registry: either all in Docker Hub or all in Azure Container Registry. See more details in All images must use the same registry.
Update:
With the message that you deploy the Web App using an image from an ACR in a different subscription, it seems this is a bug in Web App and you can see the issue on GitHub. The suggestion is that maybe you can use a service principal for the ACR to authenticate, following the steps here.
I have spent some time on this issue and figured it out. Here is my solution:
Assuming we have two subscriptions, let's call them SUB-A and SUB-B, with an Azure Container Registry in SUB-A (called azurebluedev in my example).
Now we'd like to create an App Service in SUB-B that pulls its image from our container registry using the admin username.
It's critical that you use the correct format under Image and tag in the Docker blade when creating the App Service. It must follow the format url/image:tag (without https), otherwise you will run into the described problem (see the example below). I was using the image:tag format beforehand, which didn't work.
This worked for me!
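For example, with the registry above and the repository name from the error message in the question (the tag here is a hypothetical placeholder), the Image and tag field would look something like:
azurebluedev.azurecr.io/cliswebapi:latest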

Do I need an Azure Storage Account to run a WebJob?

So I'm fairly new to working with Azure and there are some things I can't quite wrap my head around. One of them being the Azure Storage Account.
My web job keeps stopping with the following error: "Unhandled Exception: System.InvalidOperationException: The account credentials for '[account_name]' are incorrect." Understanding the error, however, is not the problem, at least that's what I think. The problem lies in understanding why I need an Azure Storage Account to overcome it.
Please read on as I try to take you through the steps taken thus far. Hopefully the real question will become more clear to you.
In my efforts to deploy a WebJob on Azure we have created the following resources so far:
App Service Plan
App Service
SQL server
SQL database
I'm using the following code snippet to prevent my web job from exiting:
JobHostConfiguration config = new JobHostConfiguration();
config.DashboardConnectionString = null;
new JobHost(config).RunAndBlock();
To my understanding from other sources the Dashboard connection string is optional but the AzureWebJobsStorage connection string is required.
I tried setting the required connection string in the portal using the configuration found here.
DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey
Looking further, I found this answer that clearly states where I would get the values needed, namely an Azure Storage Account, which I am missing.
So now for the actual question: why do I need an Azure Storage Account when I seemingly have all the resources I need in place for the WebJob to run? What does it do? Is it a billing thing? I thought we had that defined in the App Service Plan. I've tried reading up on Azure Storage Accounts over here, but I need a bit more help understanding how it relates to everything.
From the docs:
An Azure storage account provides resources for storing queue and blob data in the cloud.
It's also used by the WebJobs SDK to store logging data for the dashboard.
Refer to the getting started guide and documentation for further information
The answer to your question is "No", it is not mandatory to use Azure Storage when you are trying to set up and run an Azure web job.
If you are using JobHost or JobHostConfiguration, then there is indeed a dependency on a storage account.
A sample code snippet is given below.
class Program
{
    static void Main()
    {
        Functions.ExecuteTask();
    }
}

public class Functions
{
    [NoAutomaticTrigger]
    public static void ExecuteTask()
    {
        // Execute your task here
    }
}
The answer is no, you don't. You can have a WebJob run without being tied to an Azure Storage Account. Like Murray mentioned, your WebJob dashboard does use a storage account to log data but that's completely independent.
