Import of an attribute of a Python module fails - python-3.x

I have the following directory structure:
http://localhost:8888/notebooks/translation.ipynb
http://localhost:8888/edit/Fill_temp/prepare_test_data.py
In
prepare_test_data.py
I have a function:
def to_cap (EXP_FILE, SAMPLES_FILE: str= EXP_FILE + '.cap', cap_rate=0, by_token=False):
In the notebook
translation.ipynb
I do these imports:
%load_ext autoreload
%autoreload 2
import Fill_temp
import Fill_temp.prepare_test_data
then I run
Fill_temp.prepare_test_data.to_cap("en12.json.pres", "en12.cap.0")
and I get
AttributeError: module 'Fill_temp.prepare_test_data' has no attribute 'to_cap'
How come?
I explicitly imported both the Fill_temp package and the prepare_test_data module.
Do I need to import even the lowest level functions that are defined in the module?
EDIT:
I tried to import the low level function explicitly:
%load_ext autoreload
%autoreload 2
import Fill_temp
import Fill_temp.prepare_test_data
import Fill_temp.prepare_test_data.to_cap
but I get:
ModuleNotFoundError: No module named 'Fill_temp.prepare_test_data.to_cap'; 'Fill_temp.prepare_test_data' is not a package
So what shall I do?

This is a bit bizarre. Basically, it turned out that there was a syntax error in that low-level function.
But instead of reporting the syntax error, Jupyter was saying that it doesn't see the function, which is a really counter-intuitive error message.
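Note that the signature shown above would fail on its own in any case: default values are evaluated once, when the def statement runs, so SAMPLES_FILE cannot default to EXP_FILE + '.cap'. A minimal sketch of a working signature, using a None sentinel (the function body is elided):

def to_cap(EXP_FILE, SAMPLES_FILE=None, cap_rate=0, by_token=False):
    # Defaults cannot reference other parameters, because they are
    # evaluated at definition time; compute the dependent default here.
    if SAMPLES_FILE is None:
        SAMPLES_FILE = EXP_FILE + '.cap'
    ...  # rest of the function

This would also explain the misleading message: with %autoreload 2, a module whose reload fails keeps its old contents, so the new function never becomes an attribute of Fill_temp.prepare_test_data.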

Related

ModuleNotFoundError: No module named 'keras.layers.preprocessing'

After writing this -
VERIFICATION_SCRIPT = os.path.join(paths['APIMODEL_PATH'], 'research', 'object_detection', 'builders', 'model_builder_tf2_test.py')
!python {VERIFICATION_SCRIPT}
I am getting this error-
from keras.layers.preprocessing import image_preprocessing as image_ops
ModuleNotFoundError: No module named 'keras.layers.preprocessing'
This error occurs because there is no API named keras.layers.preprocessing. The correct name of this API is tensorflow.keras.preprocessing, and you can import image from it, not image_preprocessing.
Try using:
from tensorflow.keras.preprocessing import image as image_ops
in place of (incorrect way)
from keras.layers.preprocessing import image_preprocessing as image_ops

OpenMDAO and NSGA II

I found some interesting code in openmdao\drivers\tests\test_pyoptsparse_driver.py that seems to reference NSGA-II. I noticed that this is not implemented when I tried running the test code.
import sys
import copy
import unittest
sys.path.insert(0,r"[SOMEPATH Here]\GitHub\OpenMDAO")
from distutils.version import LooseVersion
import numpy as np
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
from openmdao.test_suite.components.expl_comp_array import TestExplCompArrayDense
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
# from openmdao.utils.assert_utils import assert_near_equal # NOTE: THIS FUNCTION ISN'T AVAILABLE IN THE PIP INSTALL
from openmdao.utils.general_utils import set_pyoptsparse_opt, run_driver
from openmdao.utils.testing_utils import use_tempdirs
from openmdao.utils.mpi import MPI
_, local_opt = set_pyoptsparse_opt('NSGA2')
if local_opt != 'NSGA2':
    raise unittest.SkipTest("pyoptsparse is not providing NSGA2")  # CODE BASICALLY FAILS HERE
Error that I am seeing:
"pyoptsparse is not providing NSGA2"
Can I add NSGA 2 if it's not available?
When that test was written, NSGA-II was a little difficult to compile with pyoptsparse. I think there are still some challenges with it, but it mostly works now. As of OpenMDAO V3.0, we're not using NSGA-II for anything internally. But if you get it to work, feel free to send a PR with an updated test!
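If you want to check whether your pyoptsparse build actually provides NSGA2 before running the whole test, a quick probe through pyoptsparse's OPT factory (a sketch, assuming a standard pyoptsparse install) is:

from pyoptsparse import OPT

try:
    # OPT instantiates an optimizer by name and raises if that
    # optimizer was not compiled into this pyoptsparse build.
    OPT('NSGA2')
    print("NSGA2 is available")
except Exception as err:
    print("NSGA2 is not available:", err)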

Python import files from 3 layers

I have the following file structure
home/user/app.py
home/user/content/resource.py
home/user/content/call1.py
home/user/content/call2.py
I have imported resource.py in app.py as below:
import content.resource
Also, I have imported call1 and call2 in resource.py
import call1
import call2
The requirement is to run two tests individually.
run app.py
run resource.py
When I run app.py, it says it cannot find call1 and call2.
When I run resource.py, the file runs without any issues. How can I run app.py so that it calls the functions imported in resource.py, as well as those in call1.py and call2.py?
All four files have a main block guarded by if __name__ == '__main__'.
In your __init__.py files, just create a list like this for each one. For your user __init__.py: __all__ = ["app", "content"]
And for your content __init__.py: __all__ = ["resource", "call1", "call2"]
First try: export PYTHONPATH=/home/user  <-- make sure this is the correct absolute path.
If that doesn't solve the issue, try adding content to the path as well.
try: export PYTHONPATH=/home/user/:/home/user/content/
This should definitely work.
You will then import like so:
import user.app
import user.content.resource
NOTE
Whatever you want to use, you must import it in every file where you use it. Don't bother importing in __init__.py; just list the modules that each package includes with __all__ = [...].
You have to import call1 and call2 in app.py if you want to call them there.
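For reference, here is a minimal sketch of resource.py that works when imported from app.py, assuming you launch from home/user and content/ has an __init__.py:

# home/user/content/resource.py
from content import call1  # absolute import through the content package
from content import call2

Run it on its own as a module, python -m content.resource from home/user, so the same absolute imports resolve in both cases.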

invalid syntax for unknown reason

I'm doing some Django stuff using VS Code, and this error on line 3 ("from") happened for no reason.
from django.urls import path
from * import views
urlpatterns = [path(" ",views,name="home")]
Your second import seems to be incorrect.
The syntax should be:
from <module> import <library / *>
In your case, it should be:
from views import *
That should be correct, as long as the views module does exist and can be found by Python.
You cannot import like from *. Python expects a package name after from. If you are trying to import it from the relative path it might be something like
from . import views
It's a valid syntax error, and it did not happen for no reason!
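For completeness, a minimal urls.py that avoids both problems might look like this, assuming the app's views module defines a home view (the view name here is a hypothetical example):

from django.urls import path
from . import views  # relative import of this app's views module

urlpatterns = [
    path("", views.home, name="home"),  # path() needs a view callable, not the module
]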

dask worker cannot import module

I am running a dask cluster and a worker with 16 cores using the CLI utilities.
In general it seems to work very well.
However, for some reason it will not import modules in the cwd.
I try to run the following from my notebook instance:
def tstimp():
    import os
    return os.listdir()
c.run(tstimp)
And i get the following output:
{'tcp://192.168.1.90:35885': ['class_positions.csv',
'.gitignore',
'README.md',
'fullrun.ipynb',
'.git',
'rf.py',
'__pycache__',
'dask-worker-space',
'utils.py',
'.ipynb_checkpoints']}
Note that the module rf.py is listed here.
Thus it should be possible to import it in the worker, but when I run the following code:
def tstimp():
    import rf
    return 42
c.run(tstimp)
I get this error: ModuleNotFoundError: No module named 'rf'
Why am I getting this error?
It seems like the current directory is not added to the Python path of the workers.
You should be able to fix this by adding it to the path.
def tstimp():
    import sys
    sys.path.append('.')
    import rf
    return 42
c.run(tstimp)
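Alternatively, you can ship the module to the workers instead of changing their path; dask.distributed's Client.upload_file does this (assuming c is a distributed.Client):

c.upload_file('rf.py')  # copies rf.py to every worker and makes it importable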
