Unable to import images from public registry - Azure

I'm following this guide, and when I try to import the cert-manager images into my private ACR from the command line I receive this error:
(InvalidParameters) Operation
registries-561d08e9-81e5-11ed-baec-f834415bade1 failed. Resource
/subscriptions/88ea9307-f11d-433e-88c5-7a48cbbfe2f4/resourceGroups/r0b0x/providers/Microsoft.ContainerRegistry/registries/r0b0x1
Error copying blobs. Error copying blobs. Error copying blobs.
Error copying blobs. Error copying blobs.
It seems no one has encountered this error before. With an Azure account you can reproduce the same conditions from scratch:
az group create --name sandbox --location eastus
az acr create --resource-group sandbox --name test --sku Basic
# Declare a few environment variables to use below
ACR=test
REGISTRY=quay.io
IMAGE=jetstack/cert-manager-controller
TAG=v1.8.0
az acr import --name $ACR --source $REGISTRY/$IMAGE:$TAG --image $IMAGE:$TAG
Do you have any suggestions?
Even though it fails, the last command (the import) generates something inside my private ACR. If I list the stored repositories I can see the one created above:
az acr repository list --output table
But if I try to use the image in a deployment, or try to delete it, Azure always returns a resource-not-found error. This issue is driving me crazy! What am I doing wrong?

I too am having this issue. I am following the same Azure MSLearn guide at https://learn.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli
I worked through this guide in September, and this was not a problem. In fact, I ran through it twice in September, and this az acr import step did not fail at all.
Note that Helm is not involved in this step at all - this is purely an Azure CLI operation.
I am executing az acr import while logged in to the Azure CLI as the subscription Owner, so I have the necessary roles to import and delete images.
My experience is that after receiving the error message, I find some or all of the images are in the repository, but something is corrupt. The images cannot be deleted, and they cannot be pulled. Using the Azure Portal to attempt to delete the imported repositories results in an error dialog.
I am using the same version of the Azure CLI as I used in September: 2.38.
Here is the import script:
REGISTRY_NAME=myregistry
CERT_MANAGER_REGISTRY=quay.io
CERT_MANAGER_TAG=v1.8.0
CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG
Running az acr import with --debug reveals more information:
urllib3.connectionpool: https://management.azure.com:443 "GET /subscriptions/xxxxxxxxxx-8551-44e0-ae5b-xxxxxxxx/providers/Microsoft.ContainerRegistry/locations/CENTRALUS/operationResults/registries-xxxxxx-8737-11ed-a5ae-4074e04a4d5d?api-version=2021-08-01-preview HTTP/1.1" 400 315
. . .
cli.azure.cli.core.sdk.policies: Response content:
cli.azure.cli.core.sdk.policies: {"error":{"code":"InvalidParameters","message":"Operation registries-xxxxxxx-8737-11ed-a5ae-4074e04a4d5d failed. Resource /subscriptions/xxxxxxxxxxxx-8551-44e0-ae5b-xxxxxxxxx/resourceGroups/rg-workflowsaas-nodejs/providers/Microsoft.ContainerRegistry/registries/myregistry Error copying blobs."},"status":"Failed"}
cli.azure.cli.core.util: azure.cli.core.util.handle_exception is called with an exception:
cli.azure.cli.core.util: Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/base_polling.py", line 517, in run
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/base_polling.py", line 553, in _poll
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/base_polling.py", line 595, in update_status
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/base_polling.py", line 114, in _raise_if_bad_http_status_and_method
azure.core.polling.base_polling.BadStatus: Invalid return status 400 for 'GET' operation
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 663, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 697, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 333, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/acr/import.py", line 110, in acr_import
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 1013, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 1000, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/_poller.py", line 255, in result
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/tracing/decorator.py", line 73, in wrapper_use_tracer
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/_poller.py", line 275, in wait
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/_poller.py", line 192, in _start
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/polling/base_polling.py", line 521, in run
azure.core.exceptions.HttpResponseError: (InvalidParameters) Operation registries-xxxxxxx-8737-11ed-a5ae-4074e04a4d5d failed. Resource /subscriptions/xxxxxx-8551-44e0-ae5b-xxxxxx/resourceGroups/rg-workflowsaas-nodejs/providers/Microsoft.ContainerRegistry/registries/myregistryError copying blobs.
Code: InvalidParameters
Message: Operation registries-xxxxxxx-8737-11ed-a5ae-4074e04a4d5d failed. Resource /subscriptions/xxxxxxxx-8551-44e0-ae5b-xxxxxx/resourceGroups/rg-workflowsaas-nodejs/providers/Microsoft.ContainerRegistry/registries/myregistryError copying blobs.
I found this Q&A on Microsoft Learn which attributes this to quay.io not supporting the ranged blob operations that acr import uses. It goes on to suggest manually downloading the images and then pushing them into ACR.
But I will repeat: this worked in September, using the same client versions, so something seems to have broken recently. Anyway, this appears to be the answer.
https://learn.microsoft.com/en-us/answers/questions/1136080/unable-to-import-image-to-container-registry.html

There was an answer for this posted on learn.microsoft.com, acknowledging this is a recently evolved issue - perhaps quay.io recently stopped supporting range operations - and suggesting importing the packages into your own Docker registry and deploying from there. Prior to Summer/Fall of 2022, az acr import was able to import from quay.io.
It's not an issue with Helm or with permissions; the issue is that az acr import is not compatible with quay.io's API, which does not support the range operations that az acr import uses.
https://learn.microsoft.com/en-us/answers/questions/1136080/unable-to-import-image-to-container-registry
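Since az acr import currently fails against quay.io, a manual pull/retag/push through the local Docker daemon is a workable fallback. A minimal sketch, assuming Docker and the Azure CLI are installed and using a hypothetical registry name (the cloud commands are prefixed with run=echo so the sketch is safe to paste as-is; set run= empty to execute for real):

```shell
# Fallback: copy the image through the local Docker daemon instead of
# `az acr import`. Set run= (empty) to execute for real.
run=echo

REGISTRY=myregistry                          # hypothetical ACR name
IMAGE=jetstack/cert-manager-controller
TAG=v1.8.0
SRC="quay.io/$IMAGE:$TAG"
DST="$REGISTRY.azurecr.io/$IMAGE:$TAG"

$run docker pull "$SRC"                      # pull from quay.io locally
$run docker tag "$SRC" "$DST"                # retag for the ACR login server
$run az acr login --name "$REGISTRY"         # authenticate Docker to the ACR
$run docker push "$DST"                      # push into the private registry
```

The same three pull/tag/push steps repeated for the webhook and cainjector images cover the full cert-manager set.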

az acr import needs explicit authentication in this case; provide username and password values as described below.
1. Enable the admin user on the source registry (on its Access keys blade).
2. Log in to the destination ACR:
az acr login -n <container registry name> --expose-token
3. Copy the image:
$source = "<source registry login server, e.g. source.azurecr.io>"
$imageTag = "<repository:tag>"
$destination = "<destination registry name>"
$username = "<source registry username>"
$password = "<source registry password>"
az acr login -n $destination --expose-token
az acr import --name $destination --source "$source/$imageTag" --username $username --password $password
Note: you need to provide the username and password values explicitly to the acr import command along with the image tag. Grab them from the source registry's Access keys blade.
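For completeness, --expose-token prints an access token instead of updating the Docker credential store; that token can then be fed to docker login using the documented null-GUID username. A sketch with a hypothetical registry name (run=echo keeps this a dry run; set run= empty to execute for real):

```shell
run=echo   # set run= (empty) to execute for real

REGISTRY=myregistry                          # hypothetical registry name
NULL_GUID=00000000-0000-0000-0000-000000000000

# Print only the access token (requires an active `az login` session):
$run az acr login --name "$REGISTRY" --expose-token \
     --output tsv --query accessToken
# Pipe that token into docker login, using the well-known null-GUID username:
$run docker login "$REGISTRY.azurecr.io" \
     --username "$NULL_GUID" --password-stdin
```

In practice you would capture the first command's output in a variable and pipe it into the second via stdin.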

Related

How to remove a file ending in a dot . from an Azure File Share?

We somehow managed to create a file in an Azure File Share whose name ends in a . (dot) (the file name ends in ., not the share name).
We now cannot retrieve, remove, or edit that file. Whenever we try to perform any action we get:
Extension: Microsoft_Azure_FileStorage
Content: FilePropertiesBladev2
Error code: 404
Is there any way we can remove this file using PowerShell, the Azure CLI, etc.?
Thanks
I tried in my environment and got the results below:
Status code: 404 error
This error usually indicates that the file share, or the file inside it, cannot be found in the storage account.
According to the MS docs, check whether the file is locked for editing by another user, and whether the file is open in another program.
Is there any way we can remove this file using PowerShell, the Azure CLI, etc.?
I ran the Azure CLI from PowerShell and removed the dot-named file successfully.
Initially I have some files in the file share.
Commands:
$sharename = "fileshare1"
$foldername = "directory1"
$accountname = "storage326123"
$accountkey = "<storage account key>"
az storage file delete --account-name $accountname --account-key $accountkey --share-name $sharename --path $foldername/1..json
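When the broken name is hard to spot, listing the directory first helps you capture the exact string to pass to --path. A sketch reusing the hypothetical account and share names from the answer above (run=echo keeps this a dry run; set run= empty to execute for real):

```shell
run=echo   # set run= (empty) to execute for real

accountname=storage326123        # hypothetical, from the answer above
sharename=fileshare1
foldername=directory1

# List every file name in the directory so a trailing dot is visible:
$run az storage file list \
     --account-name "$accountname" --account-key '<storage account key>' \
     --share-name "$sharename" --path "$foldername" \
     --query "[].name" --output tsv
```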

"az connectedk8s connect" has error "Problem loading the kubeconfig file.module 'collections' has no attribute 'Hashable'"

I am trying to connect my non-AKS Kubernetes cluster to Azure Arc. I want to do this entirely through the CLI. The quickstart-connect-cluster guide skips straight from resource group creation to the az connectedk8s connect step.
When attempting to connect to my cluster currently I get the following error:
$ az connectedk8s connect --name $STACK_NAME --resource-group $STACK_NAME --location eastus --tags Datacenter=miami-lab City=Miami StateOrDistrict=Florada CountryOrRegion=USA
This operation might take a while...
Problem loading the kubeconfig file.module 'collections' has no attribute 'Hashable'
I believe I may need to run some other az command to create resources I may be missing under https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/allresources
Am I missing some other resources I need to create before running the above command? If so, what is the az command needed to create these missing resources?
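The "module 'collections' has no attribute 'Hashable'" message is a Python compatibility error rather than a missing Azure resource: collections.Hashable was removed in Python 3.10 (it lives in collections.abc), and an older kubeconfig-parsing dependency bundled with the extension can still reference the old name. A likely fix, assuming that is the cause here, is simply updating the CLI and the extension (run=echo keeps this a dry run; set run= empty to execute for real):

```shell
run=echo   # set run= (empty) to execute for real

ext=connectedk8s
$run az upgrade                          # update the Azure CLI itself
$run az extension update --name "$ext"   # pull a build with patched dependencies
```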

az acr build requires the <SOURCE_LOCATION> parameter

I'm following the directions in the official documentation.
While executing the command:
az acr build --registry <container_registry_name> --image webimage
I'm receiving
the following arguments are required: <SOURCE_LOCATION>
But per the documentation, <SOURCE_LOCATION> is not a required parameter.
Has anyone encountered such a case?
I went to the root folder of the application and added a dot at the end of the command. In this way I passed the location where the code exists (<SOURCE_LOCATION>):
az acr build --registry <container_registry_name> --image webimage .
It seems this issue has already been reported; in any case, you can resolve it by passing the location where the code exists.
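For reference, <SOURCE_LOCATION> does not have to be a local directory: az acr build also accepts a remote Git context. A sketch using a hypothetical registry name and one of the Azure-Samples repositories (run=echo keeps this a dry run; set run= empty to execute for real):

```shell
run=echo   # set run= (empty) to execute for real

REGISTRY=mycontainerregistry         # hypothetical registry name
CONTEXT=https://github.com/Azure-Samples/acr-build-helloworld-node.git

# Local-directory form: the trailing "." is the build context.
$run az acr build --registry "$REGISTRY" --image webimage .
# Remote-context form: point straight at a Git repository instead.
$run az acr build --registry "$REGISTRY" --image webimage "$CONTEXT"
```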

az storage container list doesn't work, referencing deleted storage

I am following this tutorial, running the Azure CLI (v2.11) locally on macOS:
https://learn.microsoft.com/en-us/learn/modules/provision-infrastructure-azure-pipelines/6-run-terraform-remote-storage
after following a few steps including this one:
az storage account create --name tfsa$UNIQUE_ID --resource-group tf-storage-rg --sku Standard_LRS
and have run this command:
az storage container list --query "[].{name:name}" --output tsv
I receive the following:
HTTPSConnectionPool(host='mystorageaccount20822.blob.core.windows.net', port=443): Max retries exceeded with url: /?comp=list&maxresults=5000 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x10d2566a0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
The above command works in Cloud Shell but fails in my local shell (running v2.20, up to date).
In Cloud Shell I do get this warning, though:
There are no credentials provided in your command and environment, we will query for the account key inside your storage account. Please provide --connection-string, --account-key or --sas-token as credentials, or use --auth-mode login if you have required RBAC roles in your command. For more information about RBAC roles in storage, visit https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli.
I had previously created mystorageaccount20822 a couple of weeks ago but deleted it... is my Azure CLI still bound to this previous account? Is there a way to tell the Azure CLI (on Mac) to sync up with the resources I currently have running? In the Azure Portal, mystorageaccount20822 does NOT exist.
Does the Azure CLI cache some values? Is there some hidden config file that has the old 'mystorageaccount20822' set, which the CLI references each time instead of the new account named tfsa$UNIQUE_ID?
After running the command with --debug:
az storage container list --debug --account-name tfsa$UNIQUE_ID --query [].name --output tsv
I was able to see what was being set.
It turned out the environment variable AZURE_STORAGE_CONNECTION_STRING, set during a tutorial a few days earlier, was overriding the account name in the command and pointing it at the old example value. After unsetting that environment variable, the command worked.
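The fix above boils down to spotting, and clearing, a stale AZURE_STORAGE_CONNECTION_STRING, which the CLI prefers over --account-name. A sketch (the final command is a dry-run echo, and the account name is the tutorial's placeholder):

```shell
# The CLI silently prefers AZURE_STORAGE_CONNECTION_STRING over
# --account-name, so an old exported value redirects every storage command.
if [ -n "${AZURE_STORAGE_CONNECTION_STRING:-}" ]; then
    echo "stale value: $AZURE_STORAGE_CONNECTION_STRING"
    unset AZURE_STORAGE_CONNECTION_STRING
fi

run=echo   # set run= (empty) to execute for real
$run az storage container list \
     --account-name "tfsa$UNIQUE_ID" --auth-mode login \
     --query "[].name" --output tsv
```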

Azure function app create

I'm using az functionapp create to create a function app in Azure; apart from creating the function app, it should also hook it to a Bitbucket repo. I'm using the parameter --deployment-source-url (-u), but it doesn't seem to work this way and gives me an error. This is done by a Jenkinsfile pipeline:
node {
stage('Azure Login') {
withCredentials([azureServicePrincipal('6-8afd-ae40e9cf1e74')]) {
sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
sh 'az account set -s $AZURE_SUBSCRIPTION_ID'
}
}
stage('Build Azure FunctionApp') {
sh 'az functionapp create -g $RG_NAME -p $SP_NAME -n grey-$JOB_NAME-$BUILD_NUMBER -s $SA_NAME --deployment-source-url https://bitbucket.org/xxxx/functions/src/develop --debug'
}
}
If I put --deployment-source-url -u https://user#bitbucket.org I get:
ERROR: az functionapp create: error: argument
--deployment-source-url/-u: expected one argument
I tried without the -u, just --deployment-source-url https://#bitbucket.org,
and the job gets done, but the link with the Bitbucket repo is not made.
So how does this work? How come if I put the user it says invalid argument, and if I don't it passes but can't find the user? Has anyone ever used this command to create a function app? Thanks!
If you want to create an Azure Function via the Azure CLI, you can change the deployment source URL after --deployment-source-url. You can refer to my command, which creates a function with a blob trigger; replace the URL with yours. It works fine on my side.
Note: the repository's access level should be public; you can check it in the repository Settings.
az functionapp create --deployment-source-url https://bitbucket.org/xxx/azure-function --resource-group resourcegroupname --consumption-plan-location westeurope --name joyfun22 --storage-account <storage_name>
Besides, you can also use a GitHub repository to create a function.
For example, use the command below to create a function with a blob trigger:
az functionapp create --deployment-source-url https://github.com/Joyw1/Azure-Function-Trigger --resource-group myResourceGroup --consumption-plan-location westeurope --name <app_name> --storage-account <storage_name>
Update:
If your access level is private, you need an access token to access your Bitbucket repository. Please follow the steps below.
1. Go to Bitbucket Labs -> Access Management -> OAuth -> Add consumer.
For more details, refer to this link.
2. Enable authenticated git deployment with the Azure CLI:
#!/bin/bash
gitrepo=<Replace with your GitHub repo URL e.g. https://github.com/Azure-Samples/functions-quickstart.git>
token=<Replace with a GitHub access token>
# Enable authenticated git deployment
az functionapp deployment source update-token \
--git-token $token
For the complete command, refer to this link.
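Putting the answer's two steps together: store the OAuth token once, then create the app against the private repo. All names and URLs below are hypothetical placeholders (run=echo keeps this a dry run; set run= empty to execute for real):

```shell
run=echo   # set run= (empty) to execute for real

gitrepo=https://bitbucket.org/xxxx/functions   # hypothetical repo URL
token='<your Bitbucket OAuth access token>'    # from the consumer created above

# 1. Register the token so deployments can authenticate:
$run az functionapp deployment source update-token --git-token "$token"
# 2. Create the function app pointing at the private repository:
$run az functionapp create --resource-group myResourceGroup \
     --consumption-plan-location westeurope --name myfunctionapp \
     --storage-account mystorageaccount \
     --deployment-source-url "$gitrepo"
```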
