Related
I've been here:
http://www.python.org/dev/peps/pep-0328/
http://docs.python.org/2/tutorial/modules.html#packages
Python packages: relative imports
python relative import example code does not work
Relative imports in python 2.5
Relative imports in Python
Python: Disabling relative import
and plenty of URLs that I did not copy, some on SO, some on other sites, back when I thought I'd have the solution quickly.
The forever-recurring question is this: how do I solve this "Attempted relative import in non-package" message?
ImportError: attempted relative import with no known parent package
I built an exact replica of the package on pep-0328:
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py
The imports were done from the console.
I did make functions named spam and eggs in their appropriate modules. Naturally, it didn't work. The answer is apparently in the 4th URL I listed, but it's all Greek to me. There was this response on one of the URLs I visited:
Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__'), then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system.
The above response looks promising, but it's all hieroglyphs to me. So my question — how do I make Python stop giving me "Attempted relative import in non-package"? — apparently has an answer that involves -m.
Can somebody please tell me why Python gives that error message, what it means by "non-package", how you define a 'package', and the precise answer, put in terms easy enough for a kindergartener to understand?
Script vs. Module
Here's an explanation. The short version is that there is a big difference between directly running a Python file, and importing that file from somewhere else. Just knowing what directory a file is in does not determine what package Python thinks it is in. That depends, additionally, on how you load the file into Python (by running or by importing).
There are two ways to load a Python file: as the top-level script, or as a
module. A file is loaded as the top-level script if you execute it directly, for instance by typing python myfile.py on the command line. It is loaded as a module when an import statement is encountered inside some other file. There can only be one top-level script at a time; the top-level script is the Python file you ran to start things off.
Naming
When a file is loaded, it is given a name (which is stored in its __name__ attribute).
If it was loaded as the top-level script, its name is __main__.
If it was loaded as a module, its name is [ the filename, preceded by the names of any packages/subpackages of which it is a part, separated by dots ], for example, package.subpackage1.moduleX.
But be aware: if you load moduleX as a module from the shell command line using something like python -m package.subpackage1.moduleX, its __name__ will still be __main__.
So for instance in your example:
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
    moduleA.py
if you imported moduleX (note: imported, not directly executed), its name would be package.subpackage1.moduleX. If you imported moduleA, its name would be package.moduleA. However, if you directly run moduleX from the command line, its name will instead be __main__, and if you directly run moduleA from the command line, its name will be __main__. When a module is run as the top-level script, it loses its normal name and its name is instead __main__.
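You can see this for yourself with a one-line experiment (illustrative only; the print is not part of the original example): put print(__name__) at the top of moduleX.py and watch how the value changes with the loading mechanism.

# package/subpackage1/moduleX.py
print(__name__)

# Imported from code sitting next to the package/ directory:
#   >>> import package.subpackage1.moduleX       -> prints "package.subpackage1.moduleX"
# Run directly as the top-level script:
#   $ python package/subpackage1/moduleX.py      -> prints "__main__"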
Accessing a module NOT through its containing package
There is an additional wrinkle: the module's name depends on whether it was imported "directly" from the directory it is in or imported via a package. This only makes a difference if you run Python in a directory, and try to import a file in that same directory (or a subdirectory of it). For instance, if you start the Python interpreter in the directory package/subpackage1 and then do import moduleX, the name of moduleX will just be moduleX, and not package.subpackage1.moduleX. This is because Python adds the current directory to its search path when the interpreter is entered interactively; if it finds the to-be-imported module in the current directory, it will not know that that directory is part of a package, and the package information will not become part of the module's name.
A special case is if you run the interpreter interactively (e.g., just type python and start entering Python code on the fly). In this case, the name of that interactive session is __main__.
Now here is the crucial thing for your error message: if a module's name has no dots, it is not considered to be part of a package. It doesn't matter where the file actually is on disk. All that matters is what its name is, and its name depends on how you loaded it.
Now look at the quote you included in your question:
Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__'), then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system.
Relative imports...
Relative imports use the module's name to determine where it is in a package. When you use a relative import like from .. import foo, the dots indicate to step up some number of levels in the package hierarchy. For instance, if your current module's name is package.subpackage1.moduleX, then ..moduleA would mean package.moduleA. For a from .. import to work, the module's name must have at least as many dots as there are in the import statement.
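As a concrete sketch (the layout and the spam/eggs/foo names follow the PEP 328 example quoted in the question; this assumes moduleX was imported as part of the package, not run directly):

# package/subpackage1/moduleX.py  -- __name__ is "package.subpackage1.moduleX"
from .moduleY import spam                 # one dot  -> package.subpackage1.moduleY
from ..subpackage2.moduleZ import eggs    # two dots -> package.subpackage2.moduleZ
from ..moduleA import foo                 # two dots -> package.moduleA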
... are only relative in a package
However, if your module's name is __main__, it is not considered to be in a package. Its name has no dots, and therefore you cannot use from .. import statements inside it. If you try to do so, you will get the "relative-import in non-package" error.
Scripts can't import relative
What you probably did is you tried to run moduleX or the like from the command line. When you did this, its name was set to __main__, which means that relative imports within it will fail, because its name does not reveal that it is in a package. Note that this will also happen if you run Python from the same directory where a module is, and then try to import that module, because, as described above, Python will find the module in the current directory "too early" without realizing it is part of a package.
Also remember that when you run the interactive interpreter, the "name" of that interactive session is always __main__. Thus you cannot do relative imports directly from an interactive session. Relative imports are only for use within module files.
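For example, starting an interactive session inside package/subpackage1 and trying a relative import fails immediately (transcript abbreviated):

>>> from . import moduleY
Traceback (most recent call last):
  ...
ImportError: attempted relative import with no known parent package

(Older Python 2 versions phrase it as "ValueError: Attempted relative import in non-package"; the cause is the same: the session's name is __main__, which contains no dots.)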
Two solutions:
If you really do want to run moduleX directly, but you still want it to be considered part of a package, you can do python -m package.subpackage1.moduleX. The -m tells Python to load it as a module, not as the top-level script.
Or perhaps you don't actually want to run moduleX, you just want to run some other script, say myfile.py, that uses functions inside moduleX. If that is the case, put myfile.py somewhere else – not inside the package directory – and run it. If inside myfile.py you do things like from package.moduleA import spam, it will work fine.
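A minimal sketch of that second option (myfile.py and the spam name are just placeholders):

# myfile.py -- lives NEXT TO the package directory, not inside it
from package.subpackage1.moduleX import spam

spam()

Run it as python myfile.py from the directory that contains both myfile.py and package/.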
Notes
For either of these solutions, the package directory (package in your example) must be accessible from the Python module search path (sys.path). If it is not, you will not be able to use anything in the package reliably at all.
Since Python 2.6, the module's "name" for package-resolution purposes is determined not just by its __name__ attribute but also by its __package__ attribute. That's why I'm avoiding using the explicit symbol __name__ to refer to the module's "name". Since Python 2.6 a module's "name" is effectively __package__ + '.' + __name__, or just __name__ if __package__ is None.
This is really a problem within Python. The origin of the confusion is that people mistakenly take relative imports to be path-relative, which they are not.
For example when you write in faa.py:
from .. import foo
This has a meaning only if faa.py was identified and loaded by Python, during execution, as part of a package. In that case, the module's name for faa.py would be, for example, some_packagename.faa. If the file was loaded just because it is in the current directory when Python is run, then its name would not refer to any package and eventually the relative import would fail.
A simple solution to refer modules in the current directory, is to use this:
if __package__ is None or __package__ == '':
    # uses current directory visibility
    import foo
else:
    # uses current package visibility
    from . import foo
There are too many overly long answers here, so I'll try to make it short.
If you write from . import module, then contrary to what you might think, module will not be imported from the current directory, but from the top level of your package! If you run the .py file as a script, it simply doesn't know where the top level is and thus refuses to work.
If you start it like this: py -m package.module from the directory above package, then Python knows where the top level is. That's very similar to Java: java -cp bin_directory package.class
So, after carping about this along with many others, I came across a note posted by Dorian B in this article. It solved the specific problem I was having: I develop modules and classes for use with a web service, but I also want to be able to test them as I'm coding, using the debugger facilities in PyCharm. To run tests in a self-contained class, I would include the following at the end of my class file:
if __name__ == '__main__':
    # run test code here...
but if I wanted to import other classes or modules in the same folder, I would then have to change all my import statements from relative notation to local references (i.e. remove the dot). But after reading Dorian's suggestion, I tried his 'one-liner' and it worked! I can now test in PyCharm and leave my test code in place when I use the class in another class under test, or when I use it in my web service!
# import any site-lib modules first, then...
import sys

parent_module = sys.modules['.'.join(__name__.split('.')[:-1]) or '__main__']
if __name__ == '__main__' or parent_module.__name__ == '__main__':
    from codex import Codex  # these are in same folder as module under test!
    from dblogger import DbLogger
else:
    from .codex import Codex
    from .dblogger import DbLogger
The if statement checks to see if we're running this module as main or if it's being used in another module that's being tested as main. Perhaps this is obvious, but I offer this note here in case anyone else frustrated by the relative import issues above can make use of it.
Here's a general recipe, modified to fit as an example, that I am using right now for dealing with Python libraries written as packages, that contain interdependent files, where I want to be able to test parts of them piecemeal. Let's call this lib.foo and say that it needs access to lib.fileA for functions f1 and f2, and lib.fileB for class Class3.
I have included a few print calls to help illustrate how this works. In practice you would want to remove them (and maybe also the from __future__ import print_function line).
This particular example is too simple to show when we really need to insert an entry into sys.path. (See Lars' answer for a case where we do need it, when we have two or more levels of package directories, and then we use os.path.dirname(os.path.dirname(__file__)) — but it doesn't really hurt here either.) It's also safe enough to do this without the if _i not in sys.path test. However, if each imported file inserts the same path — for instance, if both fileA and fileB want to import utilities from the package — this clutters up sys.path with the same path many times, so it's nice to have the if _i not in sys.path in the boilerplate.
from __future__ import print_function  # only when showing how this works

if __package__:
    print('Package named {!r}; __name__ is {!r}'.format(__package__, __name__))
    from .fileA import f1, f2
    from .fileB import Class3
else:
    print('Not a package; __name__ is {!r}'.format(__name__))
    # these next steps should be used only with care and if needed
    # (remove the sys.path manipulation for simple cases!)
    import os, sys
    _i = os.path.dirname(os.path.abspath(__file__))
    if _i not in sys.path:
        print('inserting {!r} into sys.path'.format(_i))
        sys.path.insert(0, _i)
    else:
        print('{!r} is already in sys.path'.format(_i))
    del _i  # clean up global name space
    from fileA import f1, f2
    from fileB import Class3

... all the code as usual ...

if __name__ == '__main__':
    import doctest, sys
    ret = doctest.testmod()
    sys.exit(0 if ret.failed == 0 else 1)
The idea here is this (and note that these all function the same across python2.7 and python 3.x):
If run as import lib or from lib import foo as a regular package import from ordinary code, __package__ is 'lib' and __name__ is 'lib.foo'. We take the first code path, importing from .fileA, etc.
If run as python lib/foo.py, __package__ will be None and __name__ will be __main__.
We take the second code path. The lib directory will already be in sys.path so there is no need to add it. We import from fileA, etc.
If run within the lib directory as python foo.py, the behavior is the same as for case 2.
If run within the lib directory as python -m foo, the behavior is similar to cases 2 and 3. However, the path to the lib directory is not in sys.path, so we add it before importing. The same applies if we run Python and then import foo.
(Since . is in sys.path, we don't really need to add the absolute version of the path here. This is where a deeper package nesting structure, where we want to do from ..otherlib.fileC import ..., makes a difference. If you're not doing this, you can omit all the sys.path manipulation entirely.)
Notes
There is still a quirk. If you run this whole thing from outside:
$ python2 -m lib.foo
or:
$ python3 -m lib.foo
the behavior depends on the contents of lib/__init__.py. If that exists and is empty, all is well:
Package named 'lib'; __name__ is '__main__'
But if lib/__init__.py itself imports foo so that it can export foo.name directly as lib.name, you get:
$ python2 -m lib.foo
Package named 'lib'; __name__ is 'lib.foo'
Package named 'lib'; __name__ is '__main__'
That is, the module gets imported twice, once via the package and then again as __main__ so that it runs your main code. Python 3.6 and later warn about this:
$ python3 -m lib.foo
Package named 'lib'; __name__ is 'lib.foo'
[...]/runpy.py:125: RuntimeWarning: 'lib.foo' found in sys.modules
after import of package 'lib', but prior to execution of 'lib.foo';
this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Package named 'lib'; __name__ is '__main__'
The warning is new, but the warned-about behavior is not. It is part of what some call the double import trap. (For additional details see issue 27487.) Nick Coghlan says:
This next trap exists in all current versions of Python, including 3.3, and can be summed up in the following general guideline: "Never add a package directory, or any directory inside a package, directly to the Python path".
Note that while we violate that rule here, we do it only when the file being loaded is not being loaded as part of a package, and our modification is specifically designed to allow us to access other files in that package. (And, as I noted, we probably shouldn't do this at all for single level packages.) If we wanted to be extra-clean, we might rewrite this as, e.g.:
import os, sys
_i = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if _i not in sys.path:
    sys.path.insert(0, _i)
else:
    _i = None
from sub.fileA import f1, f2
from sub.fileB import Class3
if _i:
    sys.path.remove(_i)
del _i
That is, we modify sys.path long enough to achieve our imports, then put it back the way it was (deleting one copy of _i if and only if we added one copy of _i).
Here is one solution that I would not recommend, but which might be useful in some situations where the modules were simply not set up as a package:
import os
import sys
parent_dir_name = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.append(parent_dir_name + "/your_dir")
import your_script
your_script.a_function()
@BrenBarn's answer says it all, but if you're like me it might take a while to understand. Here's my case and how @BrenBarn's answer applies to it; perhaps it will help you.
The case
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
    moduleA.py
Using our familiar example, add to it that moduleX.py has a relative import of ..moduleA. Given that, I tried writing a test script in the subpackage1 directory that imported moduleX, but then I got the dreaded error described by the OP.
Solution
Move test script to the same level as package and import package.subpackage1.moduleX
Explanation
As explained, relative imports are resolved relative to the current module's name. When my test script imports moduleX from the same directory, the module name inside moduleX is just moduleX. When it encounters a relative import, the interpreter can't back up the package hierarchy because it's already at the top.
When I import moduleX from above instead, the name inside moduleX is package.subpackage1.moduleX and the relative import can be resolved.
Following up on what Lars has suggested, I've wrapped this approach in an experimental, new import library: ultraimport
It gives the programmer more control over imports and allows file-system-based imports. Therefore, you can do relative imports from scripts; a parent package is not necessary. ultraimport will always work, no matter how you run your code or what your current working directory is, because ultraimport makes imports unambiguous. You don't need to change sys.path, and you also don't need a try/except block to sometimes do relative imports and sometimes absolute ones.
You would then write in somefile.py something like:
import ultraimport
foo = ultraimport('__dir__/foo.py')
__dir__ is the directory of somefile.py, the caller of ultraimport(). foo.py would live in the same directory as somefile.py.
One caveat when importing scripts like this is if they contain further relative imports. ultraimport has a built-in preprocessor to rewrite subsequent relative imports to ultraimports so that they continue to work. Though, this is currently somewhat limited, as original Python imports are ambiguous and there's only so much you can do about it.
I had a similar problem where I didn't want to change the Python module search path and needed to load a module relatively from a script (in spite of "scripts can't import relative", as BrenBarn explained nicely above).
So I used the following hack. Unfortunately, it relies on the imp module, which has been deprecated since version 3.4 and is to be dropped in favour of importlib. (Is this possible with importlib, too? I don't know.) Still, the hack works for now.
Example for accessing members of moduleX in subpackage1 from a script residing in the subpackage2 folder:
#!/usr/bin/env python3

import inspect
import imp
import os

def get_script_dir(follow_symlinks=True):
    """
    Return directory of code defining this very function.
    Should work from a module as well as from a script.
    """
    script_path = inspect.getabsfile(get_script_dir)
    if follow_symlinks:
        script_path = os.path.realpath(script_path)
    return os.path.dirname(script_path)

# loading the module (hack, relying on deprecated imp-module)
PARENT_PATH = os.path.dirname(get_script_dir())
(x_file, x_path, x_desc) = imp.find_module('moduleX', [PARENT_PATH + '/' + 'subpackage1'])
module_x = imp.load_module('subpackage1.moduleX', x_file, x_path, x_desc)

# importing a function and a value
function = module_x.my_function
VALUE = module_x.MY_CONST
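(For reference, the same file-based load appears to be possible with importlib; the following is an untested sketch that reuses the get_script_dir() helper above, with the module name and paths as assumptions.)

# alternative using importlib instead of the deprecated imp module (sketch)
import importlib.util
import os

PARENT_PATH = os.path.dirname(get_script_dir())
x_path = os.path.join(PARENT_PATH, 'subpackage1', 'moduleX.py')

spec = importlib.util.spec_from_file_location('subpackage1.moduleX', x_path)
module_x = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module_x)

function = module_x.my_function
VALUE = module_x.MY_CONST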
A cleaner approach seems to be to modify the sys.path used for loading modules as mentioned by Federico.
#!/usr/bin/env python3

if __name__ == '__main__' and __package__ is None:
    from os import sys, path
    # __file__ should be defined in this case
    PARENT_DIR = path.dirname(path.dirname(path.abspath(__file__)))
    sys.path.append(PARENT_DIR)

from subpackage1.moduleX import *
__name__ changes depending on whether the code in question is run in the global namespace or as part of an imported module.
If the code is not running in the global namespace, __name__ will be the name of the module. If it is running in the global namespace – for example, if you type it into a console, or run the module as a script using python.exe yourscriptnamehere.py – then __name__ becomes "__main__".
You'll see a lot of Python code where if __name__ == '__main__' is used to test whether the code is being run from the global namespace – that allows you to have a module that doubles as a script.
Did you try to do these imports from the console?
Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__'), then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system.
I wrote a little Python package, published to PyPI, that might help viewers of this question. The package acts as a workaround if one wishes to be able to run Python files containing imports of upper-level packages from within a package/project without being directly in the importing file's directory. https://pypi.org/project/import-anywhere/
In most cases when I see the ValueError: attempted relative import beyond top-level package and pull my hair out, the solution is as follows:
You need to step one level higher in the file hierarchy!
#dir/package/module1/foo.py
#dir/package/module2/bar.py
from ..module1 import foo
Importing bar.py when the interpreter is started in dir/package/ will result in an error, despite the import process never going beyond your current directory.
Importing bar.py when the interpreter is started in dir/ will succeed.
Similarly for unit tests:
python3 -m unittest discover --start-directory=. successfully works from dir/, but not from dir/package/.
I am running Python 2.5.
This is my folder tree:
ptdraft/
    nib.py
    simulations/
        life/
            life.py
(I also have __init__.py in each folder, omitted here for readability)
How do I import the nib module from inside the life module? I am hoping it is possible to do without tinkering with sys.path.
Note: The main module being run is in the ptdraft folder.
You could use relative imports (python >= 2.5):
from ... import nib
(What’s New in Python 2.5) PEP 328: Absolute and Relative Imports
EDIT: added another dot '.' to go up two packages
I posted a similar answer also to the question regarding imports from sibling packages. You can see it here.
Solution without sys.path hacks
Summary
Wrap the code into one folder (e.g. packaged_stuff)
Create a setup.py script where you use setuptools.setup().
Pip install the package in editable state with pip install -e <myproject_folder>
Import using from packaged_stuff.modulename import function_name
Setup
I assume the same folder structure as in the question
.
└── ptdraft
    ├── __init__.py
    ├── nib.py
    └── simulations
        ├── __init__.py
        └── life
            ├── __init__.py
            └── life.py
I call the . the root folder, and in my case it is located in C:\tmp\test_imports.
Steps
Add a setup.py to the root folder
The contents of the setup.py can be simply
from setuptools import setup, find_packages
setup(name='myproject', version='1.0', packages=find_packages())
Basically "any" setup.py would work. This is just a minimal working example.
Use a virtual environment
If you are familiar with virtual environments, activate one and skip to the next step. Use of a virtual environment is not absolutely required, but it will really help you out in the long run (when you have more than one project ongoing..). The most basic steps are (run in the root folder):
Create virtual env
python -m venv venv
Activate virtual env
. venv/bin/activate (Linux) or ./venv/Scripts/activate (Win)
Deactivate virtual env
deactivate (Linux)
To learn more about this, just Google out "python virtualenv tutorial" or similar. You probably never need any other commands than creating, activating and deactivating.
Once you have made and activated a virtual environment, your console should show the name of the virtual environment in parentheses:
PS C:\tmp\test_imports> python -m venv venv
PS C:\tmp\test_imports> .\venv\Scripts\activate
(venv) PS C:\tmp\test_imports>
pip install your project in editable state
Install your top level package myproject using pip. The trick is to use the -e flag when doing the install. This way it is installed in an editable state, and all the edits made to the .py files will be automatically included in the installed package.
In the root directory, run
pip install -e . (note the dot, it stands for "current directory")
You can also see that it is installed by using pip freeze
(venv) PS C:\tmp\test_imports> pip install -e .
Obtaining file:///C:/tmp/test_imports
Installing collected packages: myproject
Running setup.py develop for myproject
Successfully installed myproject
(venv) PS C:\tmp\test_imports> pip freeze
myproject==1.0
Import by prepending mainfolder to every import
In this example, the mainfolder would be ptdraft. This has the advantage that you will not run into name collisions with other module names (from the Python standard library or 3rd party modules).
Example Usage
nib.py
def function_from_nib():
    print('I am the return value from function_from_nib!')
life.py
from ptdraft.nib import function_from_nib

if __name__ == '__main__':
    function_from_nib()
Running life.py
(venv) PS C:\tmp\test_imports> python .\ptdraft\simulations\life\life.py
I am the return value from function_from_nib!
Relative imports (as in from .. import mymodule) only work in a package.
To import 'mymodule' that is in the parent directory of your current module:
import os
import sys
import inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import mymodule
Edit: the __file__ attribute is not always available. Instead of using os.path.abspath(__file__), I now suggest using the inspect module to retrieve the filename (and path) of the current file.
It seems that the problem is not related to the module being in a parent directory or anything like that.
You need to add the directory that contains ptdraft to PYTHONPATH
You said that import nib worked for you; that probably means that you added ptdraft itself (not its parent) to PYTHONPATH.
You can use an OS-dependent path in the "module search path", which is listed in sys.path.
So you can easily add the parent directory like the following:
import sys
sys.path.insert(0,'..')
If you want to add parent-parent directory,
sys.path.insert(0,'../..')
This works both in python 2 and 3.
Don't know much about python 2.
In python 3, the parent folder can be added as follows:
import sys
sys.path.append('..')
...and then one is able to import modules from it
If adding your module folder to the PYTHONPATH didn't work, you can modify the sys.path list in your program, which the Python interpreter searches for the modules to import. The Python documentation says:
When a module named spam is imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
the directory containing the input script (or the current directory).
PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
the installation-dependent default.
After initialization, Python programs can modify sys.path. The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended.
Knowing this, you can do the following in your program:
import sys
# Add the ptdraft folder path to the sys.path list
sys.path.append('/path/to/ptdraft/')
# Now you can import your module
from ptdraft import nib
# Or just
import ptdraft
Here is an answer that's simple, small and cross-platform, so you can see how it works.
It only uses built-in modules (os, sys and inspect), so it should work
on any operating system (OS) because Python is designed for that.
Shorter code for answer - fewer lines and variables
from inspect import getsourcefile
import os.path as path, sys
current_dir = path.dirname(path.abspath(getsourcefile(lambda:0)))
sys.path.insert(0, current_dir[:current_dir.rfind(path.sep)])
import my_module # Replace "my_module" here with the module name.
sys.path.pop(0)
For fewer lines than this, replace the second line with import os.path as path, sys, inspect,
add inspect. at the start of getsourcefile (line 3) and remove the first line
– however, this imports all of the inspect module so could need more time, memory and resources.
The code for my answer (longer version)
from inspect import getsourcefile
import os.path
import sys
current_path = os.path.abspath(getsourcefile(lambda:0))
current_dir = os.path.dirname(current_path)
parent_dir = current_dir[:current_dir.rfind(os.path.sep)]
sys.path.insert(0, parent_dir)
import my_module # Replace "my_module" here with the module name.
It uses an example from a Stack Overflow answer How do I get the path of the current
executed file in Python? to find the source (filename) of running code with a built-in tool.
from inspect import getsourcefile
from os.path import abspath
Next, wherever you want to find the source file from you just use:
abspath(getsourcefile(lambda:0))
My code adds a file path to sys.path, the Python path list,
because this allows Python to import modules from that folder.
After importing a module in the code, it's a good idea to run sys.path.pop(0) on a new line
when that added folder has a module with the same name as another module that is imported
later in the program. You need to remove the list item added before the import, not other paths.
If your program doesn't import other modules, it's safe not to delete the file path, because
after a program ends (or the Python shell is restarted), any edits made to sys.path disappear.
Notes about a filename variable
My answer doesn't use the __file__ variable to get the file path/filename of running
code because users here have often described it as unreliable. You shouldn't use it
for importing modules from parent folder in programs used by other people.
Some examples where it doesn't work (quote from this Stack Overflow question):
• it can't be found on some platforms
• it sometimes isn't the full file path
py2exe doesn't have a __file__ attribute, but there is a workaround
When you run from IDLE with execute() there is no __file__ attribute
OS X 10.6 where I get NameError: global name '__file__' is not defined
Here is a more generic solution that includes the parent directory in sys.path (works for me):
import os.path, sys
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), os.pardir))
The pathlib library (included with >= Python 3.4) makes it very concise and intuitive to append the path of the parent directory to the PYTHONPATH:
import sys
from pathlib import Path
sys.path.append(str(Path('.').absolute().parent))
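Note that Path('.') is resolved against the current working directory, not the script's location. If you want the path anchored to the file itself, a __file__-based variant of the same idea (my adaptation, not part of the original answer) would be:

import sys
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parent.parent))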
In a Jupyter Notebook (opened with Jupyter LAB or Jupyter Notebook)
As long as you're working in a Jupyter Notebook, this short solution might be useful:
%cd ..
import nib
It works even without an __init__.py file.
I tested it with Anaconda3 on Linux and Windows 7.
I found that the following way works for importing a package from the script's parent directory. In the example, I would like to import functions in env.py from the app.db package.
.
└── my_application
    ├── alembic
    │   └── env.py
    └── app
        ├── __init__.py
        └── db
import os
import sys
currentdir = os.path.dirname(os.path.realpath(__file__))
parentdir = os.path.dirname(currentdir)
sys.path.append(parentdir)
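With the parent directory (my_application) on sys.path, the package import can then follow; the imported name below is only a placeholder:

from app.db import some_function  # placeholder; import whatever env.py actually needs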
The above-mentioned solutions are also fine. Another approach to this problem:
If you want to import anything from the top-level directory:
from ...module_name import *
If you want to import a module from the parent directory:
from ..module_name import *
And if you want to import a particular module from inside a package in the parent's parent directory:
from ...module_name.another_module import *
This way you can import any particular method if you want to.
Simplest two-line solution
import os, sys
sys.path.insert(0, os.getcwd())
This works if the parent folder is your working directory and you want to call child modules from child scripts.
You can then import all child modules from the parent directory in any script and execute it as
python child_module1/child_script.py
For me the shortest and my favorite oneliner for accessing to the parent directory is:
sys.path.append(os.path.dirname(os.getcwd()))
or:
sys.path.insert(1, os.path.dirname(os.getcwd()))
os.getcwd() returns the name of the current working directory, os.path.dirname(directory_name) returns the directory name for the passed one.
Actually, in my opinion a Python project's architecture should be designed so that no module in a child directory uses any module from the parent directory. If something like this happens, it is worth rethinking the project tree.
Another way is to add parent directory to PYTHONPATH system environment variable.
Though the original author is probably no longer looking for a solution, for completeness, there is one simple solution: run life.py as a module, like this:
cd ptdraft
python -m simulations.life.life
This way you can import anything from nib.py, since the ptdraft directory is on the path.
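Under this approach, the import at the top of life.py can stay a plain absolute import (a sketch; the function name is a placeholder):

# ptdraft/simulations/life/life.py
import nib              # works because ptdraft/ is on sys.path when run with -m from ptdraft
nib.some_function()     # placeholder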
I think you can try this in that specific example (I'm on Python 3.6.3):
import sys
sys.path.append('../')
same sort of style as the past answer - but in fewer lines :P
import os, sys
parentdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, parentdir)
__file__ returns the location of the file you are working in; going up two dirname calls from it gives the parent directory.
In a Linux system, you can create a soft link from the "life" folder to the nib.py file. Then, you can simply import it like:
import nib
I have a solution specifically for git repositories.
First I used sys.path.append('..') and similar solutions. This causes problems, especially if you are importing files which are themselves importing files with sys.path.append('..').
I then decided to always append the root directory of the git repository. In one line it would look like this:
sys.path.append(git.Repo('.', search_parent_directories=True).working_tree_dir)
Or in more details like this:
import os
import sys
import git

def get_main_git_root(path):
    main_repo_root_dir = git.Repo(path, search_parent_directories=True).working_tree_dir
    return main_repo_root_dir

main_repo_root_dir = get_main_git_root('.')
sys.path.append(main_repo_root_dir)
For the original question: Based on what the root directory of the repository is, the import would be
import ptdraft.nib
or
import nib
Our folder structure:
/myproject
    project_using_ptdraft/
        main.py
    ptdraft/
        __init__.py
        nib.py
        simulations/
            __init__.py
            life/
                __init__.py
                life.py
The way I understand this is to have a package-centric view.
The package root is ptdraft, since it's the topmost level that contains __init__.py
All the files within the package can use absolute paths (that are relative to package root) for imports, for example
in life.py, we have simply:
import ptdraft.nib
However, to run life.py for package dev/testing purposes, instead of python life.py, we need to use:
cd /myproject
python -m ptdraft.simulations.life.life
Note that we didn't need to fiddle with any path at all at this point.
Further confusion arises when we complete the ptdraft package and want to use it in a driver script, which is necessarily outside of the ptdraft package folder, aka project_using_ptdraft/main.py; then we would need to fiddle with paths:
import sys
sys.path.append("/myproject") # folder that contains ptdraft
import ptdraft
import ptdraft.simulations
and use python main.py to run the script without problem.
Helpful links:
https://tenthousandmeters.com/blog/python-behind-the-scenes-11-how-the-python-import-system-works/ (see how __init__.py can be used)
https://chrisyeh96.github.io/2017/08/08/definitive-guide-python-imports.html#running-package-initialization-code
https://stackoverflow.com/a/50392363/2202107
https://stackoverflow.com/a/27876800/2202107
Work with libraries.
Make a library called nib, install it using setup.py, let it reside in site-packages, and your problems are solved.
You don't have to stuff everything you make into a single package. Break it up into pieces.
I had a problem where I had to import a Flask application that had an import which in turn needed to import files in separate folders. This partially uses Remi's answer, but suppose we had a repository that looks like this:
.
└── service
    ├── misc
    │   └── categories.csv
    ├── test
    │   └── app_test.py
    ├── app.py
    └── pipeline.py
Then, before importing the app object from the app.py file, we change the directory one level up, so that when we import the app (which imports pipeline.py), we can also read in miscellaneous files like a CSV file.
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
os.chdir('../')
from app import app
After having imported the Flask app, you can use os.chdir('./test') to switch back, so that your working directory is not left changed.
It seems to me that you don't really need to import the parent module. Let's imagine that nib.py contains func1() and data1, which you need to use in life.py.
nib.py
import simulations.life.life as life

def func1():
    pass

data1 = {}

life.share(func1, data1)
life.py
func1 = data1 = None

def share(*args):
    global func1, data1
    func1, data1 = args
And now you have access to func1 and data1 in life.py. Of course you have to be careful that they are populated before you try to use them.
I made this library to do this.
https://github.com/fx-kirin/add_parent_path
# Just add parent path
add_parent_path(1)

# Append to sys.path and remove it when the with statement exits.
with add_parent_path(1):
    # Import modules in the parent path
    pass
This is the simplest solution that works for me:
from ptdraft import nib
After removing some sys.path hacks, I thought it might be valuable to add my preferred solution.
Note: this is a frame challenge - it's not necessary to do in-code.
Assuming a tree,
project
└── pkg
    └── test.py
Where test.py contains
import sys, json; print(json.dumps(sys.path, indent=2))
Executing using the path only includes the package directory
python pkg/test.py
[
"/project/pkg",
...
]
But using the module argument includes the project directory
python -m pkg.test
[
"/project",
...
]
Now, all imports can be absolute, from the project directory. No further skullduggery required.
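So if pkg also contained, say, a helper.py (a hypothetical sibling module), test.py could import it with a plain absolute import:

# pkg/test.py, run with: python -m pkg.test
from pkg import helper   # resolved from the project directory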
Although it is against all rules, I still want to mention this possibility:
You can first copy the file from the parent directory to the child directory. Next import it and subsequently remove the copied file:
for example in life.py:
import os
import shutil
shutil.copy('../nib.py', '.')
import nib
os.remove('nib.py')
# now you can use it just fine:
nib.foo()
Of course several problems might arise when nib tries to import/read other files with relative imports/paths.
This works for me to import things from a higher folder.
import os
os.chdir('..')
So I built a Python package locally:
cgi#cgires:~$ pip list | grep mads
madscgi 0.1.0
That's nice! Afterwards I can use it in a Jupyter Notebook, in the IPython shell, in the Python shell and even in Python scripts outside the module's code. So it works as expected, 100%, outside the module's code.
That's nice, but next I want to import code from one built module (inside the package) into another Python file (inside the package). Let's name it import_test.py and try it out.
It fails if it is executed in the directory the package is built from. It looks like the Python interpreter is picking up the parent directory (which has the same name as the module), and that is what fails.
Is it possible to enforce the usage of the installed pip package?
As @MisterMiyagi pointed out, the problem was that there was an upper folder with the same name as the module.
Here, mads_cons is the upper folder of import_test.py. Therefore, the upper folder is imported instead of the module installed via pip. That's it.
The file you want to import should either be in the same folder or be referred to with its absolute path.
If that doesn't suit you, you can inspect sys.path:
import sys
sys.path
You can keep your file in any of the directories sys.path returns.
A smart choice would be to keep the file inside
......../site-packages/
I'm currently trying to work in Python 3 and use absolute imports to import one module into another, but I get the error ModuleNotFoundError: No module named '__main__.moduleB'; '__main__' is not a package. Consider this project structure:
proj
    __init__.py3 (empty)
    moduleA.py3
    moduleB.py3
moduleA.py3
from .moduleB import ModuleB
ModuleB.hello()
moduleB.py3
class ModuleB:
    def hello():
        print("hello world")
Then running python3 moduleA.py3 gives the error. What needs to be changed here?
.moduleB is a relative import. Relative imports only work when the parent module is imported or loaded first. That means you need to have proj imported somewhere in your current runtime environment. When you are using the command python3 moduleA.py3, it gets no chance to import the parent module. You can:
Use from proj.moduleB import ModuleB, OR
You can create another script, let's say run.py, to invoke from proj import moduleA
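A minimal sketch of that second option (run.py is just a suggested name):

# run.py -- placed in the directory that CONTAINS the proj/ folder
from proj import moduleA   # importing through the package gives .moduleB a parent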
Good luck with your journey to the awesome land of Python.
Foreword
I'm developing a project which in fact is a Python package that can be installed through pip, but it also exposes a command line interface. I don't have problems running my project after installing it with pip install ., but hey, who does this every time after changing something in one of the project files? I needed to run the whole thing with a simple python mypackage/main.py.
/my-project
    - README.md
    - setup.py
    /mypackage
        - __init__.py
        - main.py
        - common.py
The different faces of the same problem
I tried importing a few functions in main.py from my common.py module. I tried different configurations that gave different errors, and I want to share my observations with you and leave a quick note for future me as well.
Relative import
The first what I tried was a relative import:
from .common import my_func
I ran my application with simple: python mypackage/main.py. Unfortunately this gave the following error:
ModuleNotFoundError: No module named '__main__.common'; '__main__' is not a package
The cause of this problem is that the main.py was executed directly by python command, thus becoming the main module named __main__. If we connect this information with the relative import we used, we get what we have in the error message: __main__.common. This is explained in the Python documentation:
Note that relative imports are based on the name of the current module. Since the name of the main module is always __main__, modules intended for use as the main module of a Python application must always use absolute imports.
When I installed my package with pip install . and then ran it, it worked perfectly fine. I was also able to import mypackage.main module in a Python console. So it looks like there's a problem only with running it directly.
Absolute import
Let's follow the advise from the documentation and change the import statement to something different:
from common import my_func
If we now try to run this as before: python mypackage/main.py, then it works as expected! But there's a caveat when you, like me, develop something that needs to work as a standalone command line tool after installing it with pip. I installed my package with pip install . and then tried to run it...
ModuleNotFoundError: No module named 'common'
What's worse, when I opened a Python console and tried to import the main module manually (import mypackage.main), I got the same error as above. The reason for that is simple: common is no longer a relative import, so Python tries to find it among the installed packages. We don't have such a package installed, which is why it fails.
The solution with an absolute import works well only when you create a typical Python app that is executed with a python command.
Import with a package name
There is also a third possibility to import the common module:
from mypackage.common import my_func
This is not very different from the relative import approach, as long as we do it from the context of mypackage. And again, trying to run this with python mypackage/main.py ends similarly:
ModuleNotFoundError: No module named 'mypackage'
However irritating that may be, the interpreter is right: you don't have such a package installed.
The solution
For simple Python apps
Just use absolute imports (without the dot), and everything will be fine.
For installable Python apps in development
Use relative imports, or imports with the package name at the beginning, because you need them like this when your app is installed. When it comes to running such a module in development, Python can be executed with the -m option:
-m mod : run library module as a script (terminates option list)
So instead of python mypackage/main.py, do it like this: python -m mypackage.main.
In addition to md-sabuj-sarker's answer, there is a really good example in the Python modules documentation.
This is what the docs say about intra-package-references:
Note that relative imports are based on the name of the current module. Since the name of the main module is always "__main__", modules intended for use as the main module of a Python application must always use absolute imports.
If you run python3 moduleA.py3, moduleA is used as the main module, so using the absolute import looks like the right thing to do.
However, beware that this absolute import (from package.module import something) fails if, for some reason, the package contains a module file with the same name as the package (at least, on my Python 3.7). So, for example, it would fail if you have (using the OP's example):
proj/
    __init__.py (empty)
    proj.py (same name as package)
    moduleA.py
    moduleB.py
in which case you would get:
ModuleNotFoundError: No module named 'proj.moduleB'; 'proj' is not a package
Alternatively, you could remove the . in from .moduleB import, as suggested here and here, which seems to work, although my PyCharm (2018.2.4) marks this as an "Unresolved reference" and fails to autocomplete.
Maybe you can do this before importing the module:
moduleA.py3
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from moduleB import ModuleB
ModuleB.hello()
This adds the script's own directory to the module search path.
Just rename the file from where you run the app to main.py:
from app import app

if __name__ == '__main__':
    app.run()
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
will solve the import path issue.
Professional Python newbie here. I have created a Python module called aviation with a file in there called database.py.
I have another module called core, with a file in there called calculator.py.
I want to import aviation/database.py into my calculator.py.
The basic structure is as follows:
My project
    aviation (module)
        - database.py
    core (module)
        - calculator.py
    test.py
My calculator.py file has an import such as:
from aviation import database as aviation_database
This module is not recognised and I get a red squiggly line indicating as much.
If I create another file test.py outside of aviation and core and add the above import, there are no issues in this test.py file – the import works fine.
It appears that I need to do something so that my module can import from another module... it does allow me to import installed modules (like date), but I have no idea what I am missing.
I am using the IntelliJ IDE and my code is located in the regular C:\Users\\IdeaProjects directory.
Can someone tell me what I should do and why I am facing this problem?
Well, I would recommend you to create an __init__.py file in each module directory.
The python3 doc states that :
The __init__.py files are required to make Python treat the
directories as containing packages; this is done to prevent
directories with a common name, such as string, from unintentionally
hiding valid modules that occur later on the module search path. In
the simplest case, __init__.py can just be an empty file, but it can
also execute initialization code for the package or set the __all__
variable, described later.
So your Directory Structure becomes
My project
    aviation (module)
        - database.py
        - __init__.py
    core (module)
        - calculator.py
        - __init__.py
    test.py
Now, in each __init__.py file, import everything that you want to use from other packages. For example, in aviation's __init__.py file write:
import database
Similarly, for core's __init__.py
Now in calculator.py you can import them by -
from aviation import database
Let me know if this helps!
Thanks
It's as simple as this.
For instance, you have already-written code, e.g. myname.py, and you want to import the entire module into new code, e.g. newwork.py.
Step 1: open your new code newwork.py.
Step 2: just as you do for any other import, write:
import myname
That's all.
Now you have your entire myname code inside newwork. When you run newwork, it runs myname alongside.
I hope this helps somebody.