Unable to access Python packages installed in Azure ML

I am trying to deploy a pre-trained ML model (saved as an .h5 file) to Azure ML. I have created an AKS cluster and am trying to deploy the model as shown below:
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice, LocalWebservice
from azureml.core.compute import ComputeTarget
workspace = Workspace.from_config(path="config.json")
env = Environment.get(workspace, name='AzureML-TensorFlow-1.13-GPU')
# Installing packages present in my requirements file
with open('requirements.txt') as f:
    dependencies = f.readlines()
dependencies = [x.strip() for x in dependencies if '# ' not in x]
dependencies.append("azureml-defaults>=1.0.45")
env.python.conda_dependencies = CondaDependencies.create(conda_packages=dependencies)
# Including the source folder so that all helper scripts are included in my deployment
inference_config = InferenceConfig(entry_script='app.py', environment=env, source_directory='./ProcessImage')
aks_target = ComputeTarget(workspace=workspace, name='sketch-ppt-vm')
# Deployment with suitable config
deployment_config = AksWebservice.deploy_configuration(cpu_cores=4, memory_gb=32)
model = Model(workspace, 'sketch-inference')
service = Model.deploy(workspace, "process-sketch-dev", [model], inference_config, deployment_config, deployment_target=aks_target, overwrite=True)
service.wait_for_deployment(show_output = True)
print(service.state)
My main entry script requires some additional helper scripts, which I include by mentioning the source folder in my inference config.
I was expecting that the helper scripts I add would be able to access the packages installed while setting up the environment during deployment, but I get a ModuleNotFoundError.
Here is the error output, along with a couple of environment variables I printed while executing the entry script:
AZUREML_MODEL_DIR ---- azureml-models/sketch-inference/1
PYTHONPATH ---- /azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages:/var/azureml-server:
PATH ---- /azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/bin:/opt/miniconda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel/compilers_and_libraries/linux/mpi/bin64
Exception in worker process
Traceback (most recent call last):
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/azureml-envs/azureml_6dc005c11e151f8d9427c0c6091a1bb9/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/var/azureml-server/wsgi.py", line 1, in <module>
    import create_app
  File "/var/azureml-server/create_app.py", line 3, in <module>
    from app import main
  File "/var/azureml-server/app.py", line 32, in <module>
    from aml_blueprint import AMLBlueprint
  File "/var/azureml-server/aml_blueprint.py", line 25, in <module>
    import main
  File "/var/azureml-app/main.py", line 12, in <module>
    driver_module_spec.loader.exec_module(driver_module)
  File "/structure/azureml-app/ProcessImage/app.py", line 16, in <module>
    from ProcessImage.samples.coco.inference import run as infer
  File "/var/azureml-app/ProcessImage/samples/coco/inference.py", line 1, in <module>
    import skimage.io
ModuleNotFoundError: No module named 'skimage'
The existing answers related to this aren't of much help. I believe there must be a simpler way to fix this, since Azure ML specifically provides the feature to set up an environment with pip/conda packages installed, either by supplying a requirements.txt file or individually.
What am I missing here? Kindly help.

So, after some trial and error, creating a fresh environment and then adding the packages solved the problem for me. I am still not clear on why this didn't work when I tried to use Environment.from_pip_requirements(). A detailed answer in this regard would be interesting to read.
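For reference, the from_pip_requirements route is a one-liner; a minimal sketch of what it looks like (the environment name is illustrative):
from azureml.core.environment import Environment

# Build an environment directly from the pip requirements file
# ('sketchenv-pip' is an illustrative name)
env = Environment.from_pip_requirements(name='sketchenv-pip', file_path='requirements.txt')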
My primary task was inference: object detection on a given image, using a model developed by our own team. There are two types of imports I wanted to have:
1. Standard Python packages (installed through pip): solved by creating the conda dependencies and adding them to the env object (step 2 below).
2. Methods/variables from helper scripts (for any pre/post-processing done during model inference): solved by setting source_directory in the InferenceConfig (step 3 below).
Here is my updated script, which combines environment creation, the inference and deployment configs, and uses an existing compute target in the workspace (created through the portal).
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice, LocalWebservice
from azureml.core.compute import ComputeTarget
# 1. Instantiate the workspace
workspace = Workspace.from_config(path="config.json")
# 2. Setup the environment
env = Environment('sketchenv')
with open('requirements.txt') as f: # Fetch all dependencies as a list
    dependencies = f.readlines()
dependencies = [x.strip() for x in dependencies if '# ' not in x]
env.docker.base_image = DEFAULT_GPU_IMAGE
env.python.conda_dependencies = CondaDependencies.create(conda_packages=['numpy==1.17.4', 'Cython'], pip_packages=dependencies)
# 3. Inference Config
inference_config = InferenceConfig(entry_script='app.py', environment=env, source_directory='./ProcessImage')
# 4. Compute target (using an existing cluster from the workspace)
aks_target = ComputeTarget(workspace=workspace, name='sketch-ppt-vm')
# 5. Deployment config
deployment_config = AksWebservice.deploy_configuration(cpu_cores=6, memory_gb=100)
# 6. Model deployment
model = Model(workspace, 'sketch-inference') # Registered model (which contains model files/folders)
service = Model.deploy(workspace, "process-sketch-dev", [model], inference_config, deployment_config, deployment_target=aks_target, overwrite=True)
service.wait_for_deployment(show_output = True)
print(service.state)
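If the deployment still comes up unhealthy, the container logs are the quickest way to see the underlying import error; a minimal sketch using the service object from above:
# Inspect the container logs for entry-script/import failures (sketch)
if service.state != 'Healthy':
    print(service.get_logs())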

Related

Azure Synapse Predict Model with Synapse ML predict

I followed the official tutorial from Microsoft: https://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool
But when I execute:
#Bind model within Spark session
model = pcontext.bind_model(
    return_types=RETURN_TYPES,
    runtime=RUNTIME,
    model_alias="Sales",      # This alias will be used in the PREDICT call to refer to this model
    model_uri=AML_MODEL_URI,  # In case of AML, it will be AML_MODEL_URI
    aml_workspace=ws          # This is only for AML. In case of ADLS, this parameter can be removed
).register()
I've got:
NotADirectoryError: [Errno 20] Not a directory: '/mnt/var/hadoop/tmp/nm-local-dir/usercache/trusted-service-user/appcache/application_1648328086462_0002/spark-3d802a7e-15b7-4eb6-88c5-f0e01f8cdb35/userFiles-fbe23a43-67d3-4e65-a879-4a497e804b40/68603955220f5f8646700d809b71be9949011a2476a34965a3d5c0f3d14de79b.pkl/MLmodel'
Traceback (most recent call last):
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_context.py", line 47, in bind_model
    udf = _create_udf(
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_udf.py", line 104, in _create_udf
    model_runtime = runtime_gen._create_runtime()
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_runtime.py", line 103, in _create_runtime
    if self._check_model_runtime_compatibility(model_runtime):
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_runtime.py", line 166, in _check_model_runtime_compatibility
    model_wrapper = self._load()
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_runtime.py", line 78, in _load
    return SynapsePredictModelCache._get_or_load(
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/core/_cache.py", line 172, in _get_or_load
    model = load_model(runtime, model_uri, functions)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/utils/_model_loader.py", line 257, in load_model
    model = loader.load(model_uri, functions)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/utils/_model_loader.py", line 122, in load
    model = self._load(model_uri)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/utils/_model_loader.py", line 215, in _load
    return self._load_mlflow(model_uri)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azure/synapse/ml/predict/utils/_model_loader.py", line 59, in _load_mlflow
    model = mlflow.pyfunc.load_model(model_uri)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/mlflow/pyfunc/__init__.py", line 640, in load_model
    model_meta = Model.load(os.path.join(local_path, MLMODEL_FILE_NAME))
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/mlflow/models/model.py", line 124, in load
    with open(path) as f:
NotADirectoryError: [Errno 20] Not a directory: '/mnt/var/hadoop/tmp/nm-local-dir/usercache/trusted-service-user/appcache/application_1648328086462_0002/spark-3d802a7e-15b7-4eb6-88c5-f0e01f8cdb35/userFiles-fbe23a43-67d3-4e65-a879-4a497e804b40/68603955220f5f8646700d809b71be9949011a2476a34965a3d5c0f3d14de79b.pkl/MLmodel'
How can I fix that error?
(UPDATE 29/3/2022): You will experience this error message if your model does not contain all the required files of the MLflow model format.
As per the repro, I had created two ML models:
sklearn_regression_model: contains only the sklearn_regression_model.pkl file.
When I run PREDICT for the MLflow-packaged model named sklearn_regression_model, I get the same error as shown above.
linear_regression: contains the full set of MLflow model files, including the MLmodel metadata file.
When I run PREDICT for the MLflow-packaged model named linear_regression, it works as expected.
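For context, a model folder with the full set of files is exactly what MLflow's log_model produces; a minimal sketch with a toy scikit-learn model (the toy data and run are illustrative, the registered name matches the linear_regression model above):
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit a toy stand-in model
X = np.arange(10).reshape(-1, 1)
y = 2 * X.ravel() + 1
toy_model = LinearRegression().fit(X, y)

# log_model writes the complete MLflow model folder (MLmodel, conda.yaml,
# the pickled model, ...) and registers it under the given name
with mlflow.start_run():
    mlflow.sklearn.log_model(toy_model, artifact_path="linear_regression",
                             registered_model_name="linear_regression")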
It should be AML_MODEL_URI = "<model name>:<version>", e.g. Rossman_Sales:2, where the ":x" suffix is the model version.
Before running this script, update it with the URI of the ADLS Gen2 data file, the model output return data type, and the ADLS/AML URI for the model file.
#Set model URI
#Set AML URI, if trained model is registered in AML
AML_MODEL_URI = "<aml model uri>" #In the URI, ":x" signifies the model version in AML. You can choose which model version you want to run. If ":x" is not provided, the latest version will be picked by default.
#Set ADLS URI, if trained model is uploaded in ADLS
ADLS_MODEL_URI = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<model mlflow folder path>"
Model URI from AML Workspace:
DATA_FILE = "abfss://data@cheprasynapse.dfs.core.windows.net/AML/LengthOfStay_cooked_small.csv"
AML_MODEL_URI_SKLEARN = "aml://mlflow_sklearn:1" #Here ":1" signifies the model version in AML. We can choose which version we want to run. If ":1" is not provided, the latest version will be picked by default.
RETURN_TYPES = "INT"
RUNTIME = "mlflow"
Model URI uploaded to ADLS Gen2:
DATA_FILE = "abfss://data@cheprasynapse.dfs.core.windows.net/AML/LengthOfStay_cooked_small.csv"
AML_MODEL_URI_SKLEARN = "abfss://data@cheprasynapse.dfs.core.windows.net/linear_regression/linear_regression" #Points directly at the MLflow model folder uploaded to ADLS
RETURN_TYPES = "INT"
RUNTIME = "mlflow"

Multiple bases have instance lay-out conflict in Robot Framework

I'm creating a new testing framework. I started to implement my own functions in *.py files, but when I try to run a test, I get the following stack:
(venv) PLAMWS0024:OAT user$ robot -v CONFIG_FILE:"/variables-config.robot" ./catalog/tests/test1.robot
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/OAT/venv/bin/robot", line 5, in <module>
    from robot.run import run_cli
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/__init__.py", line 44, in <module>
    from robot.rebot import rebot, rebot_cli
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/rebot.py", line 45, in <module>
    from robot.run import RobotFramework
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/run.py", line 44, in <module>
    from robot.running.builder import TestSuiteBuilder
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/__init__.py", line 98, in <module>
    from .builder import TestSuiteBuilder, ResourceFileBuilder
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/builder/__init__.py", line 16, in <module>
    from .builders import TestSuiteBuilder, ResourceFileBuilder
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/running/builder/builders.py", line 20, in <module>
    from robot.parsing import SuiteStructureBuilder, SuiteStructureVisitor
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/__init__.py", line 380, in <module>
    from .model import ModelTransformer, ModelVisitor
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/model/__init__.py", line 18, in <module>
    from .statements import Statement
  File "/Users/user/PycharmProjects/OAT/venv/lib/python3.8/site-packages/robot/parsing/model/statements.py", line 453, in <module>
    class Error(Statement, Exception):
TypeError: multiple bases have instance lay-out conflict
I suspect it's because in one of my files I'm trying to get variables from Robot Framework's built-in functionality, and I'm thinking it's because I'm trying to use protected methods, but I am not sure.
I found the issue TypeError: multiple bases have instance lay-out conflict, and it suggests there might be a mismatch in naming conventions (or am I wrong?), but my project is quite small, so the only other option is that Robot can't see the function itself.
What could I be missing?
Some code:
Test itself:
*** Settings ***
Documentation     TO BE CHANGED
...               SET IT TO CORRECT DESCRIPTION
Library           ${EXECDIR}/file.py
Library           String

*** Test Cases ***
User can do stuff
    foo bar
from datetime import datetime
from robot.api import logger
from robot.libraries.BuiltIn import _Variables
from robot.parsing.model.statements import Error
import json
import datetime
from catalog.resources.utils.clipboardContext import get_value_from_clipboard
Vars = _Variables()
def foo_bar(params):
    # Get all variables
    country = get_value_from_clipboard('${COUNTRY}')
    address = get_value_from_clipboard('${ADDRESS}')
    city = get_value_from_clipboard('${CITY}')
    postcode = get_value_from_clipboard('${POSTALCODE}')
And calling Vars:
from robot.libraries.BuiltIn import _Variables
from robot.parsing.model.statements import Error
Vars = _Variables()
def get_value_from_clipboard(name):
    """
    Returns the value saved inside variables passed in Robot Framework
    :param name: name of the variable, needs to include the ${} part,
        for example: ${var} passed in the config file
    :return: the value itself, passed as a string
    """
    try:
        return Vars.get_variable_value(name)
    except Error as e:
        raise Error('Missing parameter in the clipboard, stack:' + str(e))
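As an aside, the same lookup can be done through the public BuiltIn library, which avoids both the protected _Variables class and the import from robot.parsing; a minimal sketch:
from robot.libraries.BuiltIn import BuiltIn

def get_value_from_clipboard(name):
    """Fetch a Robot Framework variable via the public BuiltIn API (sketch)."""
    value = BuiltIn().get_variable_value(name)
    if value is None:
        raise ValueError('Missing parameter in the clipboard: ' + name)
    return value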
What fixed the issue:
uninstalling all requirements from the requirements.txt file and reinstalling them one by one.
Additional steps I tried:
commented out all files one by one and ran only the robot command - failed, got the same errors
cleaned the venv as described here: How to reset virtualenv and pip? (failed)
checked whether any variable has the same name as something in python3.8/site-packages/robot/parsing/model/statements.py - none
So it looks like there was some clash in how the PyCharm IDE installed the requirements.

Loading a saved Tensorflow model from its .meta file

I am trying to load a TensorFlow meta graph from a saved checkpoint using TensorFlow 1.15, in order to convert it to a SavedModel for TensorFlow Serving. It is a speech recognition model with local attention and a unidirectional LSTM, implemented using the RETURNN toolkit with the TensorFlow backend. I am using the following code.
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
import sys

if len(sys.argv) != 2:
    print("Usage: " + sys.argv[0] + " save_dir")
    exit(1)

export_dir = sys.argv[1]
builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    new_saver = tf.train.import_meta_graph("./serv_test/model.238.meta")
    new_saver.restore(sess, tf.train.latest_checkpoint("./serv_test"))
    graph = tf.get_default_graph()
    input_audio = graph.get_tensor_by_name('inference/default/wav:0')
    output_hyps = graph.get_tensor_by_name('inference/default/Reshape_7:0')
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": input_audio}, {"out": output_hyps})
    builder.add_meta_graph_and_variables(sess, [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
But I am getting the following error in the import_meta_graph line:
Traceback (most recent call last):
  File "xport.py", line 16, in <module>
    new_saver=tf.train.import_meta_graph("./serv_test/model.238.meta")
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1453, in import_meta_graph
    **kwargs)[0]
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1477, in _import_meta_graph_with_return_elements
    **kwargs))
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/meta_graph.py", line 809, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/home/ubuntu/tf1.15/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered
'NativeLstm2' in binary running on ip-10-1-21-241. Make sure the Op and Kernel
are registered in the binary running in this process. Note that if you are loading a
saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler`
should be done before importing the graph, as contrib ops are lazily registered when
the module is first accessed.
Is there any way to get around this error? Is it because of the custom-built layers used in RETURNN? Is there any way to make a RETURNN model servable with TensorFlow Serving?
Thanks.
You should remove the graph=tf.Graph(); otherwise your import_meta_graph will import the meta graph into the wrong graph.
See the official TF examples of how to use import_meta_graph.
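Separately, the NotFoundError itself says the op has to be registered before the graph is imported. NativeLstm2 comes from RETURNN's compiled native ops, so loading that shared library first is the usual way to register custom ops; a minimal sketch (the .so path is hypothetical):
import tensorflow as tf

# Register RETURNN's custom ops before import_meta_graph runs
# (the library path below is a placeholder for your compiled ops)
tf.load_op_library("/path/to/NativeLstm2.so")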

ImportError: cannot import name '_ni_support' -- Using cx-freeze with scipy

I have used cx-freeze to make an executable file from my Python 3 script. The issue is that apparently cx-freeze is having a hard time importing scipy scripts. I had to resolve other issues previously, e.g. adding the tcl library files manually. Anyway, my setup file is below:
from cx_Freeze import setup, Executable
import os
os.environ['TCL_LIBRARY'] = "C:\\Users\\Gobryas\\AppData\\Local\\Continuum\\anaconda3\\tcl\\tcl8.6"
os.environ['TK_LIBRARY'] = "C:\\Users\\Gobryas\\AppData\\Local\\Continuum\\anaconda3\\tcl\\tk8.6"
additional_mods = ['numpy.core._methods', 'numpy.lib.format']
setup(name="Curve Characterization",
      options={'build_exe': {'includes': additional_mods}},
      version="0.1",
      description="",
      executables=[Executable("curvCharLite.py")])
This is the error that I get:
ImportError: cannot import name '_ni_support'
Full details
Traceback (most recent call last):
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\cx_Freeze\initscripts\__startup__.py", line 14, in run
    module.run()
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\cx_Freeze\initscripts\Console.py", line 26, in run
    exec(code, m.__dict__)
  File "curvCharLite.py", line 21, in <module>
  File "c:\users\Gobryas\documents\mo\project 1\ruptures\ruptures\__init__.py", line 8, in <module>
    from .detection import (Binseg, BottomUp, Dynp, Omp, OmpK, Pelt, Window,
  File "c:\users\Gobryas\documents\mo\project 1\ruptures\ruptures\detection\__init__.py", line 51, in <module>
    from .window import Window
  File "c:\users\Gobryas\documents\mo\project 1\ruptures\ruptures\detection\window.py", line 112, in <module>
    from scipy.signal import argrelmax
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\signal\__init__.py", line 311, in <module>
    from ._savitzky_golay import savgol_coeffs, savgol_filter
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\signal\_savitzky_golay.py", line 6, in <module>
    from scipy.ndimage import convolve1d
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\ndimage\__init__.py", line 161, in <module>
    from .filters import *
  File "C:\Users\Gobryas\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\ndimage\filters.py", line 35, in <module>
    from . import _ni_support
ImportError: cannot import name '_ni_support'
PS 1: There is a similar issue asked here, but no helpful answer really.
PS 2: I'm using scipy 1.1.0, cx-freeze 5.1.1, and python 3.6.5
So, I think I found a solution for this problem, inspired by the post by fepzzz.
It looks like there are bad conflicts between cx-freeze and scipy which can be avoided as follows: I needed to modify the include_files build option as shown below.
import os
import scipy
includefiles_list=[]
scipy_path = os.path.dirname(scipy.__file__)
includefiles_list.append(scipy_path)
build_options = dict(packages=['matplotlib'],                          # this line solves an issue w/ matplotlib
                     include_files=includefiles_list,                  # this line is for the scipy issue
                     includes=['matplotlib.backends.backend_qt5agg'])  # this line solves another issue w/ matplotlib
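These options then get passed to setup(); a short sketch reusing the Executable and project name from the setup file in the question:
from cx_Freeze import setup, Executable

# Hand the assembled options to the build_exe step
setup(name="Curve Characterization",
      options={'build_exe': build_options},
      executables=[Executable("curvCharLite.py")])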
Hope this helps others.

CX_Freeze and guiqwt works on windows not on linux

I use CX_Freeze to freeze one of my Python programs. The build system works fine on Windows. I can create a directory with the executable and the necessary dependencies that will run on any Windows system.
When I try the same on Linux, the building part
python setup.py
works fine. But when I try to run the built executable, it gives the following error.
  File "/usr/local/lib/python2.7/dist-packages/cx_Freeze/initscripts/Console.py", line 27, in <module>
    exec code in m.__dict__
  File "test.py", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/guidata/__init__.py", line 540, in <module>
    import guidata.config
  File "/usr/local/lib/python2.7/dist-packages/guidata/config.py", line 19, in <module>
    add_image_module_path("guidata", "images")
  File "/usr/local/lib/python2.7/dist-packages/guidata/configtools.py", line 100, in add_image_module_path
    add_image_path(get_module_data_path(modname, relpath=relpath), subfolders)
  File "/usr/local/lib/python2.7/dist-packages/guidata/configtools.py", line 86, in add_image_path
    for fileobj in os.listdir(path):
OSError: [Errno 20] Not a directory: '/home/user/tmp/dist/library.zip/guidata/images'
It seems that guidata is trying to find images under the non-existent library.zip/guidata/images directory. I made sure that I run the same versions of guidata and cx_Freeze on both Windows and Linux. Any help to resolve the issue is appreciated.
Minimal example
import guidata
_app = guidata.qapplication() # not required if a QApplication has already been created
import guidata.dataset.datatypes as dt
import guidata.dataset.dataitems as di
class Processing(dt.DataSet):
    """Example"""
    a = di.FloatItem("Parameter #1", default=2.3)
    b = di.IntItem("Parameter #2", min=0, max=10, default=5)
    type = di.ChoiceItem("Processing algorithm",
                         ("type 1", "type 2", "type 3"))
param = Processing()
param.edit()
setup file
import sys
import os
"""Create a stand-alone executable"""
try:
    import guidata
    from guidata.disthelpers import Distribution
except ImportError:
    raise ImportError, "This script requires guidata 1.4+"

def create_executable():
    """Build executable using ``guidata.disthelpers``"""
    dist = Distribution()
    dist.setup(name='Foo', version='0.1',
               description='bar',
               script="test.py", target_name='test.exe')
    dist.add_modules('guidata', 'guiqwt')
    # Building executable
    dist.build('cx_Freeze')

if __name__ == '__main__':
    create_executable()
OK. Here's the answer to my own question. After much digging, I realized that it is a bug in guidata and/or Python's os.path module. What happens is this: in the guidata module, in the configtools.py file, there is a function get_module_data_path that checks whether the parent 'directory' of a path like /foo/bar/library.zip/yap is a file.
import os.path as osp
...
...
datapath = get_module_path(modname)
parentdir = osp.join(datapath, osp.pardir)
if osp.isfile(parentdir):
    # Parent directory is not a directory but the 'library.zip' file:
    # this is either a py2exe or a cx_Freeze distribution
    datapath = ...
Now the test
osp.isfile("/foo/bar/library.zip/yap/..")
returns True on Windows but False on Linux. This breaks the code. The Python documentation is unclear about whether this is a bug or the intended behavior.
At the moment I don't have a solution, only a hack. I changed the above code to:
import os.path as osp
...
...
datapath = get_module_path(modname)
parentdir = osp.join(datapath, osp.pardir)
parentdir2 = osp.split(datapath.rstrip(osp.sep))[0]  # osp.sep, since only os.path is imported
if osp.isfile(parentdir) or osp.isfile(parentdir2):
    # Parent directory is not a directory but the 'library.zip' file:
    # this is either a py2exe or a cx_Freeze distribution
    datapath = ...
and everything is fine.
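For what it's worth, an equivalent way to get a lexically resolved parent path, without the rstrip/split dance, is os.path.normpath, which collapses the '..' before the filesystem is ever consulted; a small sketch:
import os.path as osp

datapath = "/foo/bar/library.zip/yap"
# normpath collapses the '..' textually, so isfile() is asked about
# '/foo/bar/library.zip' on both Windows and Linux
parentdir = osp.normpath(osp.join(datapath, osp.pardir))
print(osp.isfile(parentdir))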
