Registering and downloading a fastText .bin model fails with Azure Machine Learning Service - python-3.x

I have a simple RegisterModel.py script that uses the Azure ML Service SDK to register a fastText .bin model. This completes successfully and I can see the model in the Azure Portal UI (though I cannot see which model files are in it). I then want to download the model (DownloadModel.py) and use it for testing purposes; however, it throws an error on the model.download method (tarfile.ReadError: file could not be opened successfully) and produces a 0-byte rjtestmodel8.tar.gz file.
I then used the Azure Portal's Add Model to upload the same .bin model file, and it uploads fine. Downloading it with the DownloadModel.py script below also works fine, so I am assuming something is not correct in the register script.
Here are the 2 scripts and the stacktrace - let me know if you can see anything wrong:
RegisterModel.py
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model.register(workspace=ws,
                       model_name='rjSDKmodel10',
                       model_path='riskModel.bin')
DownloadModel.py
# Works when downloading the UI Uploaded .bin file, but not the SDK registered .bin file
import os
import azureml.core
from azureml.core import Workspace, Model
ws = Workspace.from_config()
model = Model(workspace=ws, name='rjSDKmodel10')
model.download(target_dir=os.getcwd(), exist_ok=True)
Stacktrace
Traceback (most recent call last):
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "...\.vscode\extensions\ms-python.python-2019.9.34474\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "...\.conda\envs\DoC\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "...\.conda\envs\DoC\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "...\.conda\envs\DoC\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "...\\DownloadModel.py", line 21, in <module>
model.download(target_dir=os.getcwd(), exist_ok=True)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 712, in download
file_paths = self._download_model_files(sas_to_relative_download_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 658, in _download_model_files
file_paths = self._handle_packed_model_file(tar_path, target_dir, exist_ok)
File "...\.conda\envs\DoC\lib\site-packages\azureml\core\model.py", line 670, in _handle_packed_model_file
with tarfile.open(tar_path) as tar:
File "...\.conda\envs\DoC\lib\tarfile.py", line 1578, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
Environment
riskModel.bin is 6 megs
AMLS 1.0.60
Python 3.7
Working locally with Visual Studio Code

The Azure Machine Learning service SDK has a bug in how it interacts with Azure Storage, which causes it to upload corrupted files if it has to retry an upload.
A couple of workarounds:
The bug was introduced in the 1.0.60 release. If you downgrade to AzureML-SDK 1.0.55, the code should fail when there are issues uploading instead of silently corrupting data (a sketch for verifying that a registration round-trips intact follows below).
It's possible that the retry is being triggered by the low timeout values that the AzureML-SDK defaults to. You could investigate changing the timeout in site-packages/azureml/_restclient/artifacts_client.py.
This bug should be fixed in the next release of the AzureML-SDK.
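Once on a working SDK version (or after the fix ships), a quick way to confirm that a registration round-trips intact is to download it and compare a checksum against the local file. This sketch is not part of the original answer; it just reuses the names from the question's scripts:
import hashlib
from azureml.core import Workspace, Model

def sha256_of(path):
    # Hash the file in 1 MB chunks so large models don't need to fit in memory
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

ws = Workspace.from_config()
model = Model(workspace=ws, name='rjSDKmodel10')
# download() returns the path it wrote to (a file or folder, depending on what was registered)
downloaded_path = model.download(target_dir='downloaded_check', exist_ok=True)

print('local:     ', sha256_of('riskModel.bin'))
print('downloaded:', sha256_of(downloaded_path))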

Related

Django cookiecutter with PostgreSQL setup on Ubuntu 20.04 can't migrate

I installed django cookiecutter on Ubuntu 20.04 with PostgreSQL. When I try to migrate the database I get this error:
python manage.py migrate
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 361, in execute
self.check()
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/base.py", line 387, in check
all_issues = self._run_checks(
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 64, in _run_checks
issues = run_checks(tags=[Tags.database])
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/core/checks/database.py", line 9, in check_database_backends
for conn in connections.all():
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 216, in all
return [self[alias] for alias in self]
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 213, in __iter__
return iter(self.databases)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/utils/functional.py", line 80, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/db/utils.py", line 147, in databases
self._databases = settings.DATABASES
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 79, in __getattr__
self._setup(name)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "/home/mais/PycharmProjects/django_cookiecutter_task/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 176, in __init__
raise ImproperlyConfigured("The SECRET_KEY setting must not be empty.")
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I followed all the instructions in the cookiecutter docs and ran createdb. What is wrong?
Python libraries are numerous, and to keep things simple and reusable, modules call each other. First of all, don't be scared when you see such a big error: it is only a traceback of the error, as one piece of code calls another, which calls yet another. To debug any such problem, it's important to look at the first and last .py file names. In your case, the nesting in the traceback is like this:
[Traceback flowchart image]
So, the key problem for you is The SECRET_KEY setting must not be empty.
I would recommend putting the secret key under the "config/.env" file, as mentioned here:
https://wemake-django-template.readthedocs.io/en/latest/pages/template/django.html#secret-settings-in-production
Initially, you will find the SECRET_KEY inside the settings.py file of the project folder. But it needs to be in the .env file in a production/live environment. And NEVER post the SECRET_KEY of a live environment on GitHub or even here, as it's a security risk.
Your main problem is very clear in the logs.
You need to set your SECRET_KEY environment variable and give it a value; that should get you past this error message. It might throw another error if there are other configurations that are not set properly.
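As a concrete illustration of both suggestions, a minimal sketch (the variable name DJANGO_SECRET_KEY is an assumption; use whatever name your config/.env file and settings module agree on, and never commit the real value):
# config/.env  (kept out of version control)
#   DJANGO_SECRET_KEY=<a long random string>

# settings module (sketch): read the key from the environment instead of leaving it empty
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fails loudly if the variable is not set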

tensorflow.python.framework.errors_impl.NotFoundError adds $ sign

I'm trying to train a pre-trained object detection model to detect objects from my custom dataset. All is running on Google Colab. I prepared the images, created TFRecord files for train and test, installed Tensorflow Object detection API from source, and tested that it works.
First I suspected a PYTHONPATH problem, but even when adding the folder containing the config to the path, it does not work.
This is my command line (I invoke the script from the research folder, as in the documentation):
#From the tensorflow/models/research/ directory
PIPELINE_CONFIG_PATH='/content/gdrive/My\ Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config'
MODEL_DIR=os.path.join('/content/gdrive/My\ Drive/AI/grape4/work', 'model')
NUM_TRAIN_STEPS=5000
NUM_EVAL_STEPS=1000
!python object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${MODEL_DIR} \
--num_train_steps=${NUM_TRAIN_STEPS} \
--num_eval_steps=${NUM_EVAL_STEPS} \
--alsologtostderr
Below is the error I'm getting. I have confirmed that the mentioned file exists in the folder. But what is strange to me is the added $ sign (dollar sign) in the trace:
Traceback (most recent call last):
File "object_detection/model_main.py", line 109, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "object_detection/model_main.py", line 71, in main
FLAGS.sample_1_of_n_eval_on_train_examples))
File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/model_lib.py", line 605, in create_estimator_and_inputs
pipeline_config_path, config_override=config_override)
File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/utils/config_util.py", line 103, in get_configs_from_pipeline_file
proto_str = f.read()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 122, in read
self._preread_check()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: $/content/gdrive/My Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config; No such file or directory
Does anyone know where the problem might be?
I cannot pinpoint exactly what the problem is, but I think there might be 2 problems:
1. The path that's pointing to the .config file might be wrong (see the sketch below).
2. The .config file might be corrupted.
Please take a look at these issues where problems #1 and #2 are discussed in detail. I hope this helps!
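Not part of the original answer, but one quick way to narrow down possibility #1 is to check from Python which spelling of the path actually exists before launching training. The question's PIPELINE_CONFIG_PATH keeps a backslash-escaped space ('My\ Drive'), which is shell syntax; inside a Python string the backslash is kept literally, and the leading $ in the error message suggests the ${...} expansion in the ! command also left a stray character in front of the path. A sanity-check sketch:
import os

# Candidate spellings of the pipeline config path: with the backslash kept
# literally (as in the question's string) and with a plain space.
candidates = [
    r'/content/gdrive/My\ Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config',
    '/content/gdrive/My Drive/AI/grape4/work/model/ssd_mobilenet_v2_oid_v4.config',
]
for path in candidates:
    print(os.path.exists(path), path)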

How to use paralledots api in app engine?

I want to check text similarity using the paralleldots API in App Engine, but when I set the API key in App Engine using
paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXXXXXXXXX")
App Engine gives this error:
with open('settings.cfg', 'w') as configfile:
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/stubs.py", line 278, in __init__
raise IOError(errno.EROFS, 'Read-only file system', filename)
IOError: [Errno 30] Read-only file system: 'settings.cfg'
INFO 2019-03-17 10:43:59,852 module.py:835] default: "GET / HTTP/1.1" 500 -
INFO 2019-03-17 10:46:47,548 client.py:777] Refreshing access_token
ERROR 2019-03-17 10:46:50,931 wsgi.py:263]
Traceback (most recent call last):
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/ulti72/Desktop/koda/main.py", line 26, in <module>
paralleldots.set_api_key("7PR8iwo42DGFB8qpLjpUGJPqEQHU322lqTDkgaMrX7I")
File "/home/ulti72/Desktop/koda/lib/paralleldots/config.py", line 13, in set_api_key
with open('settings.cfg', 'w') as configfile:
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/stubs.py", line 278, in __init__
raise IOError(errno.EROFS, 'Read-only file system', filename)
IOError: [Errno 30] Read-only file system: 'settings.cfg'
The paralleldots API seems to want to save a settings.cfg file to the local filesystem in response to that call, which is not allowed in the 1st-generation standard environment and is only allowed for files under /tmp in the 2nd generation.
The local development server was designed for the 1st-generation standard environment and enforces that restriction with this error. It has limited support for the 2nd-generation environment; see Python 3.7 Local Development Server Options for new App Engine apps.
Things to try:
check if specifying the location of settings.cfg is supported and, if so, make it reside under /tmp (a sketch follows this list). Maybe the local development server allows that, or you could switch to some local development method other than the development server.
check if saving the settings using an already-open file handle is supported and, if so, use one obtained from the Cloud Storage client library, something along these lines: How to zip or tar a static folder without writing anything to the filesystem in python?
check if set_api_key() supports some other method of persisting the API key than saving the settings to a file
check if it's possible to specify the API key on every subsequent call so you don't have to persist it using set_api_key() (maybe with a common wrapper function for convenience)
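For the first item, a rough sketch (not from the original answer) that assumes the library keeps writing settings.cfg relative to the current working directory, as the traceback indicates, and that the app runs where /tmp is writable (the 2nd-generation runtime):
import os
import paralleldots

# Switch the working directory to the writable filesystem before the library
# tries to create settings.cfg; adjust if other code relies on the cwd.
os.chdir('/tmp')
paralleldots.set_api_key('XXXXXXXXXXXXXXXXXXXXXXXXXXX')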

How to use sub-projects with targets with same names in Apportable?

Currently, Apportable cannot build a project containing targets with the same name. This happens because of sub-projects: I have several sub-projects which all contain targets named iOS Static Library and OS X Static Library, and their PRODUCT_NAME is overridden to prevent .a file name duplication.
Anyway, looking at the build log, Apportable seems to use the target name as a kind of global identifier, and it crashes while building.
How can I use targets with the same name across multiple sub-projects?
Here's the full build log.
Erionirr:Test9 Eonil$ apportable uninstall; rm -rf ~/.apportable/SDK/Build/android-armeabi-debug; rm -rf *.approj; apportable debug
Building to /Users/Eonil/.apportable/SDK/Build/android-armeabi-debug
Loading configuration.
Finished parsing configuration.
Loading configuration.
Finished parsing configuration.
Loading configuration.
Finished parsing configuration.
scons: Building targets ...
scons: *** [Build/android-armeabi-debug/Test9/Test9-debug.apk_debug] Source `Build/android-armeabi-debug/Test9/Test9-debug.apk' not found, needed by target `Build/android-armeabi-debug/Test9/Test9-debug.apk_debug'.
scons: building terminated because of errors.
Building to /Users/Eonil/.apportable/SDK/Build/android-armeabi-debug
Updating configuration parameters... Building Xcode project /Users/Eonil/Desktop/Apportable Knowledge Base and Bug Reporting/Running Test/Test9/Test9
Scanning build configuration for target Test9
Merging configuration parameters.
It looks like you're compiling this app for the first time.
Test9.approj/configuration.json will be created for you.
A few quick questions and you'll be on your way:
If the app is using OpenGL ES, does it use ES1 or ES2? (Cocos2D 1.X uses ES1, 2.X uses ES2)
[1/2] 2
Should the app initially launch in landscape or portrait orientation? (default: landscape)
[L/p] p
Loading configuration.
Finished parsing configuration.
Merging configuration parameters.
Loading configuration.
Finished parsing configuration.
Merging configuration parameters.
Loading configuration.
Finished parsing configuration.
Traceback (most recent call last):
File "/Users/Eonil/.apportable/SDK/bin/apportable", line 701, in <module>
run(env)
File "/Users/Eonil/.apportable/SDK/bin/apportable", line 677, in run
results = actions[args.action](env)
File "/Users/Eonil/.apportable/SDK/bin/apportable", line 95, in DebugAction
return env.DebugApp(site_init.BuildApplication(env, env['BUILD_TARGET']))
File "/Users/Eonil/.apportable/SDK/site_scons/site_init.py", line 351, in BuildApplication
return build.App(env, app_sconscript)
File "/Users/Eonil/.apportable/SDK/site_scons/build/__init__.py", line 619, in App
results = env.BuildApp(sources=sources, header_paths=headers, defines=defines, flags=flags, config=configs, deps=deps, libs=libs, java_libs=java_libs, assets=assets, pch=pchs, modules=modules, java_sources=java_sources, java_sourcepaths=java_sourcepaths, java_res_dirs=java_res_dirs)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Environment.py", line 223, in __call__
return self.method(*nargs, **kwargs)
File "/Users/Eonil/.apportable/SDK/site_scons/site_init.py", line 915, in BuildApp
build.Module(env, module["build_cwd"], module)
File "/Users/Eonil/.apportable/SDK/site_scons/build/__init__.py", line 681, in Module
env.BuildModule(target["target"], sources=sources, header_paths=headers, defines=defines, flags=flags, deps=deps, libs=libs, java_libs=java_libs, assets=assets, pch=pchs, modules=modules, java_sources=java_sources, java_sourcepaths=java_sourcepaths, java_res_dirs=java_res_dirs)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Environment.py", line 223, in __call__
return self.method(*nargs, **kwargs)
File "/Users/Eonil/.apportable/SDK/site_scons/site_init.py", line 1014, in BuildModule
BuildLibrary(env, name, sources=sources, header_paths=header_paths, static=True, defines=defines, flags=flags, deps=deps, libs=libs, pch=pch, app=True)
File "/Users/Eonil/.apportable/SDK/site_scons/site_init.py", line 755, in BuildLibrary
lib = building_env.StaticLibrary(name, objects)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Environment.py", line 259, in __call__
return MethodWrapper.__call__(self, target, source, *args, **kw)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Environment.py", line 223, in __call__
return self.method(*nargs, **kwargs)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Builder.py", line 632, in __call__
return self._execute(env, target, source, OverrideWarner(kw), ekw)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Builder.py", line 556, in _execute
_node_errors(self, env, tlist, slist)
File "/Users/Eonil/.apportable/SDK/lib/scons/engine/SCons/Builder.py", line 315, in _node_errors
raise UserError(msg)
SCons.Errors.UserError: Multiple ways to build the same target were specified for: Build/android-armeabi-debug/Eonil.Test9/iOS Static Library/libiOS Static Library.a (from ['Build/android-armeabi-debug/Eonil.Test9/Users/Eonil/Desktop/Apportable Knowledge Base and Bug Reporting/Running Test/Test9/Subproject1/Subproject1/Subproject1.m.o'] and from ['Build/android-armeabi-debug/Eonil.Test9/Users/Eonil/Desktop/Apportable Knowledge Base and Bug Reporting/Running Test/Test9/Subproject2/Subproject2/Subproject2.m.o'])
Erionirr:Test9 Eonil$
Currently this is not supported. We have been discussing internally different approaches to naming libraries that would avoid this issue while still keeping linking somewhat streamlined. Stay tuned to our release notes page; when we have a solution for this we will definitely note it there along with the user-facing changes.
If it is not too inconvenient, giving your targets unique names like "iOS Static Library Cocos2D" should fix the issue until we can resolve this at the build system level.

Pyramid mongodb scaffold failing on Python 3 due to Paste

Environment:
Python 3.2.3 (using virtualenv)
Pyramid 1.4
pyramid_mongodb scaffold
After installing myproject using the pyramid_mongodb scaffold, I ran python setup.py test -q and it fails with the errors below.
running build_ext
Traceback (most recent call last):
File "setup.py", line 33, in <module>
""",
File "/usr/lib/python3.2/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.2/distutils/dist.py", line 917, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.2/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 137, in run
self.with_project_on_sys_path(self.run_tests)
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 117, in with_project_on_sys_path
func()
File "/root/App/Big3/lib/python3.2/site-packages/distribute-0.6.24-py3.2.egg/setuptools /command/test.py", line 146, in run_tests
testLoader = loader_class()
File "/usr/lib/python3.2/unittest/main.py", line 123, in __init__
self.parseArgs(argv)
File "/usr/lib/python3.2/unittest/main.py", line 191, in parseArgs
self.createTests()
File "/usr/lib/python3.2/unittest/main.py", line 198, in createTests
self.module)
File "/usr/lib/python3.2/unittest/loader.py", line 132, in loadTestsFromNames
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/lib/python3.2/unittest/loader.py", line 132, in <listcomp>
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/lib/python3.2/unittest/loader.py", line 91, in loadTestsFromName
module = __import__('.'.join(parts_copy))
File "/root/App/Big3/Lime/lime/__init__.py", line 1, in <module>
from pyramid.config import Configurator
File "/root/App/Big3/lib/python3.2/site-packages/pyramid-1.4.1-py3.2.egg/pyramid/config /__init__.py", line 10, in <module>
from webob.exc import WSGIHTTPException as WebobWSGIHTTPException
File "/root/App/Big3/lib/python3.2/site-packages/WebOb-1.2.3-py3.2.egg/webob/exc.py", line 1115, in <module>
from paste import httpexceptions
File "/root/App/Big3/lib/python3.2/site-packages/Paste-1.7.5.1-py3.2.egg/paste /httpexceptions.py", line 634
except HTTPException, exc:
^
SyntaxError: invalid syntax
I understand the error: Paste is not Python 3 compatible. I also know how to fix it, but that would essentially mean porting Paste to Python 3 (which is something I don't want to do), so can anyone tell me what I can do?
From the error stack I see that webob/exc.py does from paste import httpexceptions, but when I checked the code I saw that the import is inside a try/except block (which doesn't raise anything in the except), so I tried running the test after removing Paste from the lib. But when I run the test, I see that setup.py installs Paste again:
running test
Checking .pth file support in .
/root/App/Big3/bin/python -E -c pass
Searching for Paste>=1.7.1
I checked the .pth files and removed the reference to Paste, then started reinstalling the project, but somehow it still sees Paste as required:
Installed /root/App/Big3/Myproject
Processing dependencies for Myproject==0.0
Searching for Paste>=1.7.1
Reading http://pypi.python.org/simple/Paste/
My setup.py file is the same as this one.
Can someone tell me where this Paste dependency is coming into my project from?
I didn't intend to answer my own question, but since I have made changes that are working for me, I thought I would share them here (assuming there are other folks wanting to get the pyramid_mongodb scaffold working on Python 3).
Changes in development.ini
Removed
[pipeline:main]
pipeline =
egg:WebError#evalerror
{{project}}
Changed
[app:{{project}}] to [app:main]
Added (optional)
pyramid.includes =
pyramid_debugtoolbar
Changed the server (from Paste to Waitress)
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
Changes in setup.py
Changed requires from
requires = ['pyramid', 'WebError', 'pymongo']
to
requires = ['pyramid', 'pyramid_debugtoolbar', 'pymongo', 'uwsgi', 'waitress']
It's important to remove WebError. (A sketch of the resulting setup.py follows below.)
The application is now working...
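For context, here is a rough sketch of how the changed requires list sits in the scaffold's setup.py; apart from the dependency list this is plain setuptools boilerplate, with the project name and version taken from the logs above:
from setuptools import setup, find_packages

# Waitress replaces the Paste-based server; WebError is dropped entirely.
requires = ['pyramid', 'pyramid_debugtoolbar', 'pymongo', 'uwsgi', 'waitress']

setup(
    name='Myproject',
    version='0.0',
    packages=find_packages(),
    include_package_data=True,
    install_requires=requires,
)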
