az dls fs upload to ADLS folder throws raise FileExistsError(rpath) error - azure

I'm trying to upload some files to a particular folder in ADLS. Below is the az upload command I am using:
az dls fs upload --account $adls_account --source-path $src_dir --destination-path $dest_dir --thread-count $thread_count --debug
The destination folder already exists in ADLS and I am trying to add some more files to it. But when running this command, it throws the error:
Traceback (most recent call last):
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/main.py", line 36, in main
cmd_result = APPLICATION.execute(args)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/application.py", line 211, in execute
result = expanded_arg.func(params)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 346, in __call__
return self.handler(*args, **kwargs)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 545, in _execute_command
reraise(*sys.exc_info())
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 522, in _execute_command
result = op(client, **kwargs) if client else op(**kwargs)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/cli/command_modules/dls/custom.py", line 174, in upload_to_adls
ADLUploader(client, destination_path, source_path, thread_count, overwrite=overwrite)
File "/mnt/resource/apps/azure-cli/lib/python2.7/site-packages/azure/datalake/store/multithread.py", line 347, in __init__
raise FileExistsError(rpath)
FileExistsError: /folder1/folder2/folder3/
I am using:
$ az --version
azure-cli (2.0.9)
Can someone please help me resolve this error? Basically, I want to turn off the overwrite behavior while uploading to ADLS.
Thanks,
Arjun

The error returned includes a reference to "FileExistsError: /folder1/folder2/folder3/", which indicates that that folder already exists.
According to the command reference, since you are not using the --overwrite parameter, the operation will fail if the destination already exists.
I can't see what value you set for $src_dir, but if it is set to "/folder1/folder2/folder3", then this error would result.
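If overwriting files that already exist at the destination is acceptable, a sketch of the same command with the --overwrite flag added (variables as in the question) would be:
az dls fs upload --account $adls_account --source-path $src_dir --destination-path $dest_dir --thread-count $thread_count --overwrite --debug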

Related

BlobServiceClient object has no attribute `exists`

I have created an Azure pipeline and added a task to download a file from Blob Storage.
But I am getting the following error:
ERROR: The command failed with an unexpected error. Here is the traceback:
ERROR: 'BlobServiceClient' object has no attribute 'exists'
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/init.py", line 658, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/init.py", line 721, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/init.py", line 713, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/init.py", line 385, in new_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/init.py", line 385, in new_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/_exception_handler.py", line 17, in file_related_exception_handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/init.py", line 692, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/init.py", line 328, in call
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/operations/blob.py", line 363, in storage_blob_download_batch
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 16, in collect_blobs
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 16, in
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/storage/util.py", line 31, in collect_blob_objects
AttributeError: 'BlobServiceClient' object has no attribute 'exists'
To open an issue, please run: 'az feedback'
##[error]PowerShell exited with code '1'.
Inline script written in Task:
az storage blob download-batch --destination $(build.sourcesDirectory) --pattern $(jmxfile) -s $(jmeter-storagecontainer) --account-name $(az-storageaccount) --account-key '$(az-accountkey)' --connection-string '$(az-connstring)'
I have verified that all the variable values are correct and that the jmxfile pattern is also correct.
Any idea why I am getting this 'BlobServiceClient' object has no attribute 'exists' error?
The error "'BlobServiceClient' object has no attribute 'exists'" usually occurs when you are using the latest version of the Azure CLI and executing the az storage blob download-batch command.
To resolve the error, try using az storage blob download as a workaround, as in the sketch below.
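A sketch of that single-blob workaround, reusing the variables from your task (this assumes $(jmxfile) is a literal blob name rather than a pattern):
az storage blob download --container-name $(jmeter-storagecontainer) --name $(jmxfile) --file "$(build.sourcesDirectory)/$(jmxfile)" --account-name $(az-storageaccount) --account-key '$(az-accountkey)'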
Otherwise, try installing a previous version by uninstalling the latest Azure CLI version.
Make sure to delete all the dependencies of the latest version when doing so.
Please note that the --pattern parameter only supports four glob cases: '*', '?', '[seq]' and '[!seq]'.
Also note that there is a bug in the latest CLI when dealing with a full blob name.
Please check the GitHub issue below, which confirms this:
Latest az cli fails to run download-batch command · Issue #21966 · Azure/azure-cli · GitHub

Django cookiecutter with PostgreSQL setup on Ubuntu 20.04 can't migrate

I installed django cookiecutter on Ubuntu 20.04
with PostgreSQL. When I try to migrate the database, I get this error:
python manage.py migrate
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 361, in execute
self.check()
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 387, in check
all_issues = self._run_checks(
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 64, in _run_checks
issues = run_checks(tags=[Tags.database])
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/database.py", line 9, in check_database_backends
for conn in connections.all():
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 216, in all
return [self[alias] for alias in self]
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 213, in __iter__
return iter(self.databases)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/utils/functional.py", line 80, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 147, in databases
self._databases = settings.DATABASES
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 79, in __getattr__
self._setup(name)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 176, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I followed all the instructions in the cookiecutter docs and ran createdb. What is wrong?
Python libraries are numerous, and to keep things simple and reusable, modules call each other. First of all, don't be scared by such a big error. It is only a traceback of the error, as one piece of code calls another, which calls another. To debug any such problem, it is important to look at the first and last .py file names. In your case, the nesting in the traceback looks like this:
[Traceback flowchart]
So, the key problem for you is: The SECRET_KEY setting must not be empty.
I would recommend putting the secret key in the config/.env file, as mentioned here:
https://wemake-django-template.readthedocs.io/en/latest/pages/template/django.html#secret-settings-in-production
Initially, you will find the SECRET_KEY inside the settings.py file of the project folder, but it needs to live in the .env file in a production/live environment. And NEVER post the SECRET_KEY of a live environment on GitHub or even here, as it is a security risk.
Your main problem is very clear in the logs.
You need to set your SECRET_KEY environment variable and give it a value; that should get you past this error. It might throw another error if there are other configurations that are not set properly.
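For illustration, a minimal sketch of providing the key before running the migration (DJANGO_SECRET_KEY is the variable name cookiecutter-django's settings typically read via django-environ; the value here is a placeholder, and the same line can go in config/.env instead):
export DJANGO_SECRET_KEY='replace-with-a-long-random-string'
python manage.py migrate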

Python script runs directly via command line but does not run via shell/bash script

I have a Python script, main.py, that does something, and to run it daily via crontab I created the following file (I think it's called a shell script):
#!/bin/sh
source /Users/PathToProject/venv/bin/activate
python /Users/PathToProject/main.py
For some time it ran daily without any problems.
Now I added a feature to main.py that saves a .CSV file containing some results to my Google Drive via PyDrive2. When running this new script via the command line it runs successfully, every time, without any errors.
I assumed that the crontab job would run as well, but now I get the traceback below.
/Users/PathToProject/venv/lib/python3.8/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access mycreds.json: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Traceback (most recent call last):
File "/Users/PathToProject/venv/lib/python3.8/site-packages/oauth2client/clientsecrets.py", line 121, in _loadfile
with open(filename, 'r') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'client_secrets.json'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/PathToProject/venv/lib/python3.8/site-packages/pydrive2/auth.py", line 431, in LoadClientConfigFile
client_type, client_info = clientsecrets.loadfile(
File "/Users/PathToProject/venv/lib/python3.8/site-packages/oauth2client/clientsecrets.py", line 165, in loadfile
return _loadfile(filename)
File "/Users/PathToProject/venv/lib/python3.8/site-packages/oauth2client/clientsecrets.py", line 124, in _loadfile
raise InvalidClientSecretsError('Error opening file', exc.filename,
oauth2client.clientsecrets.InvalidClientSecretsError: ('Error opening file', 'client_secrets.json', 'No such file or directory', 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/PathToProject/main.py", line 5, in <module>
main()
File "/Users/PathToProject/version2.py", line 20, in main
PYD.download_file(data_file)
File "/Users/PathToProject/PyDrive_Modul.py", line 58, in download_file
file_ID = get_ID_of_title(filename)
File "/Users/PathToProject/PyDrive_Modul.py", line 47, in get_ID_of_title
drive = google_drive_auth()
File "/Users/PathToProject/PyDrive_Modul.py", line 11, in google_drive_auth
gauth.LocalWebserverAuth()
File "/Users/PathToProject/venv/lib/python3.8/site-packages/pydrive2/auth.py", line 123, in _decorated
self.GetFlow()
File "/Users/PathToProject/venv/lib/python3.8/site-packages/pydrive2/auth.py", line 507, in GetFlow
self.LoadClientConfig()
File "/Users/PathToProject/venv/lib/python3.8/site-packages/pydrive2/auth.py", line 411, in LoadClientConfig
self.LoadClientConfigFile()
File "/Users/PathToProject/venv/lib/python3.8/site-packages/pydrive2/auth.py", line 435, in LoadClientConfigFile
raise InvalidConfigError("Invalid client secrets file %s" % error)
pydrive2.settings.InvalidConfigError: Invalid client secrets file ('Error opening file', 'client_secrets.json', 'No such file or directory', 2)
If I edit the Python script and skip the part that uploads/downloads to Google Drive, it works fine.
Now I don't know why this error occurs or how I can solve it. The error message seems misleading, because client_secrets.json is in the directory and everything works via the command line.
When you run the script from the command line, the relative paths to the JSON files resolve against your current working directory; cron runs from a different directory, so it cannot find them. Use absolute paths and it will run smoothly. If an absolute path is not possible, make the relative paths correct with respect to the directory cron runs the job from.
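For example, one minimal fix (a sketch, assuming client_secrets.json and mycreds.json live next to main.py) is to change into the project directory inside the shell script so PyDrive2's relative lookups resolve:
#!/bin/sh
# cd into the project so client_secrets.json and mycreds.json are found
cd /Users/PathToProject || exit 1
source /Users/PathToProject/venv/bin/activate
python /Users/PathToProject/main.py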

tensorflow.python.framework.errors_impl.NotFoundError adds $ sign

I'm trying to train a pre-trained object detection model to detect objects in my custom dataset. Everything runs on Google Colab. I prepared the images, created TFRecord files for train and test, installed the TensorFlow Object Detection API from source, and tested that it works.
At first I suspected a PYTHONPATH problem, but even when adding the folder with the config to the path, it does not work.
This is my command (I invoke the script from the research folder, as in the documentation):
#From the tensorflow/models/research/ directory
PIPELINE_CONFIG_PATH='/content/gdrive/My\ Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config'
MODEL_DIR=os.path.join('/content/gdrive/My\ Drive/AI/grape4/work', 'model')
NUM_TRAIN_STEPS=5000
NUM_EVAL_STEPS=1000
!python object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${MODEL_DIR} \
--num_train_steps=${NUM_TRAIN_STEPS} \
--num_eval_steps=${NUM_EVAL_STEPS} \
--alsologtostderr
Below is the error I'm getting. I can confirm the mentioned file exists in that folder. But what is strange to me is the added $ sign (dollar sign) in the trace:
Traceback (most recent call last):
File "object_detection/model_main.py", line 109, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "object_detection/model_main.py", line 71, in main
FLAGS.sample_1_of_n_eval_on_train_examples))
File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/model_lib.py", line 605, in create_estimator_and_inputs
pipeline_config_path, config_override=config_override)
File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/utils/config_util.py", line 103, in get_configs_from_pipeline_file
proto_str = f.read()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 122, in read
self._preread_check()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: $/content/gdrive/My Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config; No such file or directory
Does anyone know where the problem might be?
I cannot say exactly what the problem is, but I think there might be two problems:
1. The path pointing to the .config file might be wrong.
2. The .config file might be corrupted.
Please take a look at these issues, where problems #1 and #2 are discussed in detail. I hope this helps!
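As a quick way to rule out problem #1, a sketch of verifying the path from a Colab cell (path copied from the question, written without the backslash escape and without the leading $):
!ls -l '/content/gdrive/My Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config'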

Pyramid mongodb scaffold failing on Python 3 due to Paste

Environment:
Python 3.2.3 (using virtualenv)
Pyramid 1.4
pyramid_mongodb scaffold
After installing myproject using the pyramid_mongodb scaffold, I ran python setup.py test -q and it fails with the errors below.
running build_ext
Traceback (most recent call last):
File "setup.py", line 33, in <module>
""",
File "/usr/lib/python3.2/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.2/distutils/dist.py", line 917, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.2/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 137, in run
self.with_project_on_sys_path(self.run_tests)
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 117, in with_project_on_sys_path
func()
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 146, in run_tests
testLoader = loader_class()
File "/usr/lib/python3.2/unittest/main.py", line 123, in __init__
self.parseArgs(argv)
File "/usr/lib/python3.2/unittest/main.py", line 191, in parseArgs
self.createTests()
File "/usr/lib/python3.2/unittest/main.py", line 198, in createTests
self.module)
File "/usr/lib/python3.2/unittest/loader.py", line 132, in loadTestsFromNames
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/lib/python3.2/unittest/loader.py", line 132, in <listcomp>
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/lib/python3.2/unittest/loader.py", line 91, in loadTestsFromName
module = __import__('.'.join(parts_copy))
File "/root/App/Big3/Lime/lime/__init__.py", line 1, in <module>
from pyramid.config import Configurator
File "/root/App/Big3/lib/python3.2/site-packages/pyramid-1.4.1-py3.2.egg/pyramid/config /__init__.py", line 10, in <module>
from webob.exc import WSGIHTTPException as WebobWSGIHTTPException
File "/root/App/Big3/lib/python3.2/site-packages/WebOb-1.2.3-py3.2.egg/webob/exc.py", line 1115, in <module>
from paste import httpexceptions
File "/root/App/Big3/lib/python3.2/site-packages/Paste-1.7.5.1-py3.2.egg/paste /httpexceptions.py", line 634
except HTTPException, exc:
^
SyntaxError: invalid syntax
I understand the error: Paste is not Python 3 compatible. I also know how to fix it, but that would essentially mean porting Paste to Python 3 (which is something I don't want to do), so can anyone tell me what I can do?
From the error stack I see that webob/exc.py is doing from paste import httpexceptions, but when I checked the code I saw that the import is inside a try/except block (without raising any error in the except), so I even tried the test after removing Paste from the lib. But when I run the test, I see that setup.py installs Paste again:
running test
Checking .pth file support in .
/root/App/Big3/bin/python -E -c pass
Searching for Paste>=1.7.1
I checked the .pth files and removed the reference to Paste, then started re-installing the project, but somehow it still sees Paste as required:
Installed /root/App/Big3/Myproject
Processing dependencies for Myproject==0.0
Searching for Paste>=1.7.1
Reading http://pypi.python.org/simple/Paste/
My setup.py file is the same as this one.
Can someone tell me where this Paste dependency is coming into my project from?
I didn't intend to answer my own question, but since I have made changes that are working for me, I thought I would share them here (assuming there are other folks wanting to get the pyramid_mongodb scaffold working on Python 3).
Changes in development.ini
Removed:
[pipeline:main]
pipeline =
    egg:WebError#evalerror
    {{project}}
Changed [app:{{project}}] to [app:main]
Added (optional):
pyramid.includes =
    pyramid_debugtoolbar
Changed the server from Paste to waitress:
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
Changes in setup.py
Changed requires from:
requires = ['pyramid', 'WebError', 'pymongo']
to:
requires = ['pyramid', 'pyramid_debugtoolbar', 'pymongo', 'uwsgi', 'waitress']
It's important to remove WebError.
The application is now working...
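For completeness, a minimal sketch of the commands to pick up these changes and run the app (the package names come from the requires list above; pserve ships with Pyramid):
# reinstall the project so the new dependencies are pulled in, then serve with waitress
python setup.py develop
pserve development.ini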
