BlobServiceClient object has no attribute `exists` - azure

I have created an Azure pipeline and added a task to download a file from Blob Storage,
but I am getting the following error:
ERROR: The command failed with an unexpected error. Here is the traceback:
ERROR: 'BlobServiceClient' object has no attribute 'exists'
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 658, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 721, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 713, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/__init__.py", line 385, in new_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/__init__.py", line 385, in new_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/_exception_handler.py", line 17, in file_related_exception_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 692, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/operations/blob.py", line 363, in storage_blob_download_batch
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 16, in collect_blobs
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 16, in
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 31, in collect_blob_objects
AttributeError: 'BlobServiceClient' object has no attribute 'exists'
To open an issue, please run: 'az feedback'
##[error]PowerShell exited with code '1'.
The inline script written in the task:
az storage blob download-batch --destination $(build.sourcesDirectory) --pattern $(jmxfile) -s $(jmeter-storagecontainer) --account-name $(az-storageaccount) --account-key '$(az-accountkey)' --connection-string '$(az-connstring)'
I have verified that all the variable values are correct, and the jmxfile pattern is also correct.
Any idea why I am getting this 'BlobServiceClient' object has no attribute 'exists' error?

The error "'BlobServiceClient' object has no attribute 'exists'" usually occurs when you run the az storage blob download-batch command with the latest version of the Azure CLI.
To resolve the error, try using az storage blob download as a workaround.
Otherwise, try installing a previous version by uninstalling the latest Azure CLI version.
Make sure to delete all the dependencies of the latest version while doing the above step.
Please note that the --pattern parameter only supports four cases.
Please note that there is a bug dealing with full blob names in the latest CLI.
Please check the GitHub issue below, which confirms the above:
Latest az cli fails to run download-batch command · Issue #21966 · Azure/azure-cli · GitHub
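As a concrete sketch of the single-blob workaround: all values below are hypothetical placeholders; in the pipeline they would come from the same $(…) variables as the failing download-batch call.

```shell
# Hypothetical placeholder values standing in for the pipeline variables.
container=jmeter-container
blob=loadtest.jmx
account=mystorageaccount

# Run the download only where the Azure CLI is installed and signed in,
# so this sketch stays copy-paste safe outside the pipeline.
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az storage blob download \
    --container-name "$container" \
    --name "$blob" \
    --file "./$blob" \
    --account-name "$account" \
    --account-key "$ACCOUNT_KEY"
fi
echo "requested $blob from $container"
```

Unlike download-batch, this needs the exact blob name instead of a pattern, which is also what makes it immune to the pattern-matching bug in the linked issue.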


Django cookiecutter with PostgreSQL setup on Ubuntu 20.04 can't migrate

I installed django-cookiecutter on Ubuntu 20.04
with PostgreSQL. When I try to migrate the database, I get this error:
python manage.py migrate
Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 361, in execute
    self.check()
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 387, in check
    all_issues = self._run_checks(
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 64, in _run_checks
    issues = run_checks(tags=[Tags.database])
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/registry.py", line 72, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/database.py", line 9, in check_database_backends
    for conn in connections.all():
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 216, in all
    return [self[alias] for alias in self]
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 213, in __iter__
    return iter(self.databases)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/utils/functional.py", line 80, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 147, in databases
    self._databases = settings.DATABASES
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 79, in __getattr__
    self._setup(name)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 66, in _setup
    self._wrapped = Settings(settings_module)
  File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 176, in __init__
    raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I followed all the instructions in the cookiecutter docs and created the database. What is wrong?
Python libraries are numerous, and to keep things simple and reusable, modules call each other. First of all, don't be scared when you see such a big error. It is only a traceback, produced as one piece of code calls another, which calls another. To debug any such problem, it's important to look at the first and last .py file names. In your case, the nesting in the traceback is like this:
[Traceback flowchart image]
So the key problem for you is: The SECRET_KEY setting must not be empty.
I would recommend putting the secret key in the "config/.env" file, as mentioned here:
https://wemake-django-template.readthedocs.io/en/latest/pages/template/django.html#secret-settings-in-production
Initially, you will find the SECRET_KEY inside the settings.py file of the project folder, but it needs to be in the .env file in a production/LIVE environment. And NEVER post the SECRET_KEY of a live environment on GitHub or even here, as it's a security risk.
Your main problem is very clear in the logs: you need to set SECRET_KEY in your environment and give it a value. That should get you past this error message; it might then throw another error if some other configuration is not set properly.
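A minimal sketch of that fix, assuming the config/.env layout from the wemake-django-template link above. DJANGO_SECRET_KEY is an assumed variable name based on that template's docs; check the env() calls in your settings module for the exact name your project reads.

```shell
# Generate a throwaway secret key and persist it in config/.env.
# DJANGO_SECRET_KEY is an assumption from the template docs linked above.
mkdir -p config
key=$(python3 -c 'import secrets; print(secrets.token_urlsafe(50))')
echo "DJANGO_SECRET_KEY=$key" >> config/.env

# Confirm it is there, then re-run the migration:
grep DJANGO_SECRET_KEY config/.env
# python manage.py migrate
```

Generating a random key this way avoids ever committing a hard-coded secret.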

How to add preAuthorizedApplications using CLI 2.x

In Azure AD, under the expose an API section, I'm looking to automate the registration of an API and web app using CLI 2.x. I've looked through documents here but find nothing that addresses preAuthorizedApplications. Searching has yielded only information for legacy support. Where is the CLI 2.x support for setting preAuthorizedApplications data?
When populated via the portal UI, the manifest contains the relevant information:
"preAuthorizedApplications": [
    {
        "appId": "d22xxxxxxx",
        "permissionIds": [
            "ef92yyyyyy"
        ]
    }
],
...
Is this something that can be inserted into the manifest directly? Any reference to documents or samples would be greatly appreciated.
Edit: An attempt to write the property with a null value fails with error "A value without a type name was found and no expected type is available...."
az ad app update --id $appId --set preAuthorizedApplications='[]'
If I show the app properties, I see preAuthorizedApplications in the list with a null value
az ad app list --display-name $appName
So it doesn't appear that this property can be injected into the manifest for some reason.
@joy-wang's excellent answer put me on track, but it still took hours to get it right:
It is no longer /beta/; use v1.0.
permissionIds is now called delegatedPermissionIds.
The specification of headers seems to be a different style now; when specified as Joy did, I got [1].
Echoing what Joy says: yes, you need to be really careful about quotes. I did lots of experiments on the wrong things before realizing that I needed double quotes around the body and single quotes internally on values; the other way round gave the errors in [2].
The following worked:
$permsJson = az ad sp show --id $apiApplicationId --query 'oauth2Permissions[].{Value:value, Id:id, UserConsentDisplayName:userConsentDisplayName}' -o json
$permsHash = $permsJson | ConvertFrom-Json
$permId = $permsHash.Id #in my case that app only had one permission, you may need to do differently
$apiObjectId = az ad app show --id $apiApplicationId --query objectId
az rest `
--method PATCH `
--uri "https://graph.microsoft.com/v1.0/applications/$apiObjectId" `
--headers 'Content-Type=application/json' `
--body "{api:{preAuthorizedApplications:[{appId:'$preAuthedAppApplicationId',delegatedPermissionIds:['$permId']}]}}"
[1]
The command failed with an unexpected error. Here is the traceback:
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/util.py", line 510, in shell_safe_json_parse
File "json\__init__.py", line 367, in loads
File "json\decoder.py", line 339, in decode
File "json\decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/util.py", line 516, in shell_safe_json_parse
File "ast.py", line 85, in literal_eval
File "ast.py", line 66, in _convert
File "ast.py", line 65, in
File "ast.py", line 77, in _convert
File "ast.py", line 84, in _convert
ValueError: malformed node or string: <_ast.Name object at 0x04765050>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/util.py", line 807, in send_raw_request
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/util.py", line 521, in shell_safe_json_parse
knack.util.CLIError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 233, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 660, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 723, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 716, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\six.py", line 703, in reraise
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 694, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 331, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/__init__.py", line 811, in default_command_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/util/custom.py", line 17, in rest_call
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/util.py", line 810, in send_raw_request
ValueError: not enough values to unpack (expected 2, got 1)
To open an issue, please run: 'az feedback'
[2]
Bad Request({"error":{"code":"BadRequest","message":"Unable to read JSON request payload. Please ensure Content-Type header is set and payload is of valid JSON format.","innerError":{"date":"2021-06-19T12:49:52","request-id":"13fe58d2-ef15-4a57-8f95-4f30dcece5cc","client-request-id":"13fe58d2-ef15-4a57-8f95-4f30dcece5cc"}}})
I'm not sure what caused the issue; if you want to set preAuthorizedApplications with the Azure CLI, you can use az rest to call the Microsoft Graph - Update application API directly.
Sample:
az rest --method patch --uri "https://graph.microsoft.com/beta/applications/<object-id>" --headers '{"Content-Type":"application/json"}' --body '{"api":{"preAuthorizedApplications":[{"appId":"a37c1158-xxxxx94f2b","permissionIds":["5479xxxxx522869e718f0"]}]}}'
Note: You need to test the sample in bash instead of PowerShell; there are quoting issues in different terminals. If you want to run it in PowerShell, you need to change the format of the headers and body; see https://github.com/Azure/azure-cli/blob/dev/doc/use_cli_effectively.md#quoting-issues
I tested it directly in the Bash of Azure Cloud Shell, and it works fine:
Check in the portal:
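Another way to sidestep the bash/PowerShell quoting differences entirely is to put the body in a file and pass it with the CLI's @file convention for --body. A sketch, reusing the placeholder GUIDs from the question and the v1.0 names from the answer above (<object-id> stays a placeholder for your application's object id):

```shell
# Write the PATCH body as plain JSON; no shell quoting rules apply
# inside a quoted heredoc, so the same file works from any shell.
cat > body.json <<'EOF'
{
  "api": {
    "preAuthorizedApplications": [
      {
        "appId": "d22xxxxxxx",
        "delegatedPermissionIds": ["ef92yyyyyy"]
      }
    ]
  }
}
EOF

# Sanity-check the file is valid JSON before sending it.
python3 -c 'import json; json.load(open("body.json"))' && echo "body.json OK"

# Run only where the Azure CLI is installed and signed in.
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az rest --method PATCH \
    --uri "https://graph.microsoft.com/v1.0/applications/<object-id>" \
    --headers "Content-Type=application/json" \
    --body @body.json
fi
```

Validating the file first catches exactly the "Expecting property name enclosed in double quotes" class of failure shown in [1] before any request is made.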

KeyError while updating all modules in Odoo 10

Using the command python3 odoo-bin --addons=addons,/opt/git_addons/project_abcd -u all &
when I tried to update modules on the server, I got an Internal Server Error, and the error log says:
Traceback (most recent call last):
File "/opt/odoo/odoo/modules/registry.py", line 83, in new
odoo.modules.load_modules(registry._db, force_demo, status, update_module)
File "/opt/odoo/odoo/modules/loading.py", line 373, in load_modules
force, status, report, loaded_modules, update_module, models_to_check)
File "/opt/odoo/odoo/modules/loading.py", line 270, in load_marked_modules
perform_checks=perform_checks, models_to_check=models_to_check
File "/opt/odoo/odoo/modules/loading.py", line 153, in load_module_graph
registry.setup_models(cr, partial=True)
File "/opt/odoo/odoo/modules/registry.py", line 300, in setup_models
model._setup_fields(partial)
File "/opt/odoo/odoo/models.py", line 2853, in _setup_fields
field.setup_full(self)
File "/opt/odoo/odoo/fields.py", line 505, in setup_full
self._setup_regular_full(model)
File "/opt/odoo/odoo/fields.py", line 2178, in _setup_regular_full
invf = comodel._fields[self.inverse_name]
KeyError: 'standard_id'
Please help to resolve this error.
Find the standard_id field in all your modules, and upgrade the module that has the standard_id field.
If you update with -u all on the command line, it will update all your base modules first and then your custom modules.
So the reason might be that one of your modules contains this field and the Odoo registry can't find it.
With this information it's impossible to say the reason for this. One of your modules is trying to refer to a field named standard_id which doesn't exist.
Try updating your modules one by one and see which one gives this error. Then it's easier to troubleshoot further.
There may be some dependencies missing from the __manifest__.py file.
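The update-one-by-one advice above can be scripted so you don't retype the command. The module names here are hypothetical placeholders; --stop-after-init makes each run exit once its update finishes, so the loop can move on to the next module.

```shell
# Hypothetical module list; replace with the modules under
# /opt/git_addons/project_abcd.
modules="module_a module_b module_c"

for mod in $modules; do
  echo "updating $mod"
  # Run only where odoo-bin actually exists (i.e. on the Odoo server).
  if [ -f odoo-bin ]; then
    python3 odoo-bin --addons=addons,/opt/git_addons/project_abcd \
      -u "$mod" --stop-after-init || { echo "$mod failed"; break; }
  fi
done
```

The first module whose update fails with the KeyError is the one referencing the missing standard_id field (or missing a dependency in its manifest).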

Registering and downloading a fastText .bin model fails with Azure Machine Learning Service

I have a simple RegisterModel.py script that uses the Azure ML Service SDK to register a fastText .bin model. This completes successfully, and I can see the model in the Azure portal UI (though I cannot see which model files are in it). I then want to download the model (DownloadModel.py) and use it for testing purposes; however, it throws an error in the model.download method (tarfile.ReadError: file could not be opened successfully) and produces a 0-byte rjtestmodel8.tar.gz file.
I then use the Azure portal's Add Model and select the same .bin model file, and it uploads fine. Downloading it with the DownloadModel.py script below also works fine, so I am assuming something is not correct with the register script.
Here are the two scripts and the stacktrace; let me know if you can see anything wrong:
RegisterModel.py
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model.register(workspace=ws,
                       model_name='rjSDKmodel10',
                       model_path='riskModel.bin')
DownloadModel.py
# Works when downloading the UI Uploaded .bin file, but not the SDK registered .bin file
import os
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model(workspace=ws, name='rjSDKmodel10')
model.download(target_dir=os.getcwd(), exist_ok=True)
Stacktrace
Traceback (most recent call last):
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "...\.conda\envs\DoC\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "...\.conda\envs\DoC\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "...\.conda\envs\DoC\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "...\\DownloadModel.py", line 21, in <module>
model.download(target_dir=os.getcwd(), exist_ok=True)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 712, in download
file_paths = self._download_model_files(sas_to_relative_download_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 658, in _download_model_files
file_paths = self._handle_packed_model_file(tar_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 670, in _handle_packed_model_file
with tarfile.open(tar_path) as tar:
File "...\.conda\envs\DoC\lib\tarfile.py", line 1578, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
Environment
riskModel.bin is 6 MB
AMLS SDK 1.0.60
Python 3.7
Working locally with Visual Studio Code
The Azure Machine Learning service SDK has a bug in how it interacts with Azure Storage, which causes it to upload corrupted files if it has to retry an upload.
A couple of workarounds:
The bug was introduced in the 1.0.60 release. If you downgrade to AzureML-SDK 1.0.55, the code should fail when there are issues uploading instead of silently corrupting data.
It's possible that the retry is being triggered by the low timeout values that the AzureML-SDK defaults to. You could investigate changing the timeout in site-packages/azureml/_restclient/artifacts_client.py.
This bug should be fixed in the next release of the AzureML-SDK.
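Until the fix ships, it's cheap to sanity-check the registered artifact right after downloading instead of waiting for the tarfile.ReadError. A minimal sketch (the file names are illustrative, echoing the question's riskModel.bin and 0-byte tar.gz):

```shell
# A readable, non-empty gzip tar passes; a corrupted 0-byte download fails.
check_archive() {
  [ -s "$1" ] && tar -tzf "$1" >/dev/null 2>&1
}

# Demo with illustrative files: one valid archive, and one zero-byte
# file like the rjtestmodel8.tar.gz the question describes.
echo "dummy model" > riskModel.bin
tar -czf good.tar.gz riskModel.bin
: > empty.tar.gz

check_archive good.tar.gz && echo "good.tar.gz OK"
check_archive empty.tar.gz || echo "empty.tar.gz is empty or corrupt"
```

Running a check like this immediately after Model.register/download makes the silent-corruption failure loud at the point it happens.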

az dls fs upload to ADLS folder throws raise FileExistsError(rpath) error

I'm trying to upload some files to a particular folder in ADLS. Below is the az upload script I am using to upload the files.
az dls fs upload --account $adls_account --source-path $src_dir --destination-path $dest_dir --thread-count $thread_count --debug
The destination folder already exists in ADLS, and I am trying to add some more files to it. But when I run this script, it throws the error:
Traceback (most recent call last):
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/main.py", line 36, in main
cmd_result = APPLICATION.execute(args)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/application.py", line 211, in execute
result = expanded_arg.func(params)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 346, in __call__
return self.handler(*args, **kwargs)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 545, in _execute_command
reraise(*sys.exc_info())
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 522, in _execute_command
result = op(client, **kwargs) if client else op(**kwargs)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/command_modules/dls/custom.py", line 174, in upload_to_adls
ADLUploader(client, destination_path, source_path, thread_count, overwrite=overwrite)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/datalake/store/multithread.py", line 347, in __init__
raise FileExistsError(rpath)
FileExistsError: /folder1/folder2/folder3/
I am using:
$ az --version
azure-cli (2.0.9)
Can someone please help me resolve this error? Basically, I want to keep the overwrite feature turned off while uploading to ADLS.
Thanks,
Arjun
The error returned includes a reference to "FileExistsError: /folder1/folder2/folder3/", which indicates that the folder already exists.
According to the command reference, since you are not using the --overwrite parameter, the operation will fail if the destination already exists.
I can't see what value you set for $src_dir, but if this is set to "/folder1/folder2/folder3", then the error would result.
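Given that behaviour, the practical options are to pass --overwrite (accepting that same-named files get replaced) or to upload into a destination path that doesn't exist yet. A sketch with placeholder values standing in for the question's variables:

```shell
# Placeholder values standing in for $adls_account, $src_dir, etc.
adls_account=myadlsaccount
src_dir=./localdata
dest_dir=/folder1/folder2/folder3
thread_count=4

cmd="az dls fs upload --account $adls_account --source-path $src_dir --destination-path $dest_dir --thread-count $thread_count --overwrite"
echo "$cmd"

# Run it only where the Azure CLI is installed and signed in.
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  $cmd
fi
```

If overwriting is unacceptable, uploading to a fresh destination path and merging afterwards avoids the flag entirely.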
