Sublime Text 3 + Build (ctrl + b) [duplicate] - python-3.x

I am running Python 2.5.
This is my folder tree:
ptdraft/
    nib.py
    simulations/
        life/
            life.py
(I also have __init__.py in each folder, omitted here for readability)
How do I import the nib module from inside the life module? I am hoping it is possible to do without tinkering with sys.path.
Note: The main module being run is in the ptdraft folder.

You could use relative imports (python >= 2.5):
from ... import nib
(What’s New in Python 2.5) PEP 328: Absolute and Relative Imports
EDIT: added another dot '.' to go up two packages
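For orientation, here is a minimal sketch of how that line would sit inside life.py; it assumes life.py is loaded as part of the ptdraft package (imported, or run with -m from the directory containing ptdraft), never executed directly as a script:
# ptdraft/simulations/life/life.py -- a sketch, valid only when this file is
# loaded as part of the ptdraft package
from ... import nib  # three dots climb from the life package up to ptdraft

def run_simulation():
    print(nib.__name__)  # hypothetical use; substitute whatever nib actually provides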

I posted a similar answer also to the question regarding imports from sibling packages. You can see it here.
Solution without sys.path hacks
Summary
Wrap the code into one folder (e.g. packaged_stuff)
Create a setup.py script where you use setuptools.setup().
Pip install the package in editable state with pip install -e <myproject_folder>
Import using from packaged_stuff.modulename import function_name
Setup
I assume the same folder structure as in the question
.
└── ptdraft
├── __init__.py
├── nib.py
└── simulations
├── __init__.py
└── life
├── __init__.py
└── life.py
I call the . the root folder, and in my case it is located in C:\tmp\test_imports.
Steps
Add a setup.py to the root folder
--
The contents of the setup.py can be simply
from setuptools import setup, find_packages
setup(name='myproject', version='1.0', packages=find_packages())
Basically "any" setup.py would work. This is just a minimal working example.
Use a virtual environment
If you are familiar with virtual environments, activate one and skip to the next step. Using a virtual environment is not absolutely required, but it will really help you out in the long run (when you have more than one project ongoing). The most basic steps are (run in the root folder)
Create virtual env
python -m venv venv
Activate virtual env
. venv/bin/activate (Linux) or ./venv/Scripts/activate (Win)
Deactivate virtual env
deactivate (Linux)
To learn more about this, just Google out "python virtualenv tutorial" or similar. You probably never need any other commands than creating, activating and deactivating.
Once you have made and activated a virtual environment, your console should show the name of the virtual environment in parentheses
PS C:\tmp\test_imports> python -m venv venv
PS C:\tmp\test_imports> .\venv\Scripts\activate
(venv) PS C:\tmp\test_imports>
pip install your project in editable state
Install your top level package myproject using pip. The trick is to use the -e flag when doing the install. This way it is installed in an editable state, and all the edits made to the .py files will be automatically included in the installed package.
In the root directory, run
pip install -e . (note the dot, it stands for "current directory")
You can also see that it is installed by using pip freeze
(venv) PS C:\tmp\test_imports> pip install -e .
Obtaining file:///C:/tmp/test_imports
Installing collected packages: myproject
Running setup.py develop for myproject
Successfully installed myproject
(venv) PS C:\tmp\test_imports> pip freeze
myproject==1.0
Import by prepending mainfolder to every import
In this example, the mainfolder would be ptdraft. This has the advantage that you will not run into name collisions with other module names (from python standard library or 3rd party modules).
Example Usage
nib.py
def function_from_nib():
    print('I am the return value from function_from_nib!')
life.py
from ptdraft.nib import function_from_nib

if __name__ == '__main__':
    function_from_nib()
Running life.py
(venv) PS C:\tmp\test_imports> python .\ptdraft\simulations\life\life.py
I am the return value from function_from_nib!

Relative imports (as in from .. import mymodule) only work in a package.
To import 'mymodule' that is in the parent directory of your current module:
import os
import sys
import inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import mymodule
edit: the __file__ attribute is not always available. Instead of using os.path.abspath(__file__), I now suggest using the inspect module to retrieve the filename (and path) of the current file

It seems that the problem is not related to the module being in a parent directory or anything like that.
You need to add the directory that contains ptdraft to PYTHONPATH
You said that import nib worked for you; that probably means that you added ptdraft itself (not its parent) to PYTHONPATH.
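If you are unsure what actually ended up on the search path, a quick sanity check (just a sketch) is to print it from the module whose import fails:
import os
import sys

# The directory that CONTAINS ptdraft (not ptdraft itself) must appear below
# for `from ptdraft import nib` to resolve.
print(os.environ.get('PYTHONPATH'))
print(sys.path)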

You can add OS-dependent paths to the "module search path", which is listed in sys.path.
So you can easily add the parent directory like the following:
import sys
sys.path.insert(0,'..')
If you want to add the parent of the parent directory,
sys.path.insert(0,'../..')
This works both in python 2 and 3.

Don't know much about python 2.
In python 3, the parent folder can be added as follows:
import sys
sys.path.append('..')
...and then one is able to import modules from it

If adding your module folder to the PYTHONPATH didn't work, you can modify the sys.path list in your program, which is where the Python interpreter searches for the modules to import. The Python documentation says:
When a module named spam is imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
the directory containing the input script (or the current directory).
PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
the installation-dependent default.
After initialization, Python programs can modify sys.path. The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended.
Knowing this, you can do the following in your program:
import sys
# Add the ptdraft folder path to the sys.path list
sys.path.append('/path/to/ptdraft/')
# Now you can import your module
from ptdraft import nib
# Or just
import ptdraft

Here is an answer that's simple so you can see how it works, small and cross-platform.
It only uses built-in modules (os, sys and inspect) so should work
on any operating system (OS) because Python is designed for that.
Shorter code for answer - fewer lines and variables
from inspect import getsourcefile
import os.path as path, sys
current_dir = path.dirname(path.abspath(getsourcefile(lambda:0)))
sys.path.insert(0, current_dir[:current_dir.rfind(path.sep)])
import my_module # Replace "my_module" here with the module name.
sys.path.pop(0)
For fewer lines than this, replace the second line with import os.path as path, sys, inspect,
add inspect. at the start of getsourcefile (line 3) and remove the first line.
- however this imports the whole inspect module, so it could need more time, memory and resources.
The code for my answer (longer version)
from inspect import getsourcefile
import os.path
import sys
current_path = os.path.abspath(getsourcefile(lambda:0))
current_dir = os.path.dirname(current_path)
parent_dir = current_dir[:current_dir.rfind(os.path.sep)]
sys.path.insert(0, parent_dir)
import my_module # Replace "my_module" here with the module name.
It uses an example from a Stack Overflow answer How do I get the path of the current
executed file in Python? to find the source (filename) of running code with a built-in tool.
from inspect import getsourcefile
from os.path import abspath
Next, wherever you want to find the source file from you just use:
abspath(getsourcefile(lambda:0))
My code adds a file path to sys.path, the Python path list,
because this allows Python to import modules from that folder.
After importing a module in the code, it's a good idea to run sys.path.pop(0) on a new line
if that added folder has a module with the same name as another module that is imported
later in the program. You need to remove the list item added before the import, not other paths.
If your program doesn't import other modules, it's safe to leave the path in place, because
after the program ends (or the Python shell restarts), any edits made to sys.path disappear.
Notes about a filename variable
My answer doesn't use the __file__ variable to get the file path/filename of running
code because users here have often described it as unreliable. You shouldn't use it
for importing modules from the parent folder in programs used by other people.
Some examples where it doesn't work (quote from this Stack Overflow question):
• it can't be found on some platforms • it sometimes isn't the full file path
py2exe doesn't have a __file__ attribute, but there is a workaround
When you run from IDLE with execute() there is no __file__ attribute
OS X 10.6 where I get NameError: global name '__file__' is not defined

Here is a more generic solution that adds the parent directory to sys.path (works for me):
import os.path, sys
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), os.pardir))

The pathlib library (included with Python >= 3.4) makes it very concise and intuitive to append the path of the parent directory to the PYTHONPATH:
import sys
from pathlib import Path
sys.path.append(str(Path('.').absolute().parent))
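If you would rather anchor the path to the file itself instead of the current working directory, a common variant (a sketch, assuming __file__ is available, which it is for normal script and module execution) is:
import sys
from pathlib import Path

# .resolve() makes the path absolute; the first .parent is the file's directory,
# the second .parent is the directory above it
sys.path.append(str(Path(__file__).resolve().parent.parent))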

In a Jupyter Notebook (opened with Jupyter LAB or Jupyter Notebook)
As long as you're working in a Jupyter Notebook, this short solution might be useful:
%cd ..
import nib
It works even without an __init__.py file.
I tested it with Anaconda3 on Linux and Windows 7.
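If you prefer not to change the notebook's working directory, a sys.path-based sketch does the same job (assuming the notebook lives one level below the folder that contains nib.py):
import os
import sys

# prepend the parent of the notebook's working directory instead of cd-ing into it
sys.path.insert(0, os.path.abspath('..'))
import nib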

I found the following way works for importing a package from the script's parent directory. In the example, I would like to import functions from the app.db package inside env.py.
.
└── my_application
    ├── alembic
    │   └── env.py
    └── app
        ├── __init__.py
        └── db
import os
import sys
currentdir = os.path.dirname(os.path.realpath(__file__))
parentdir = os.path.dirname(currentdir)
sys.path.append(parentdir)

The solutions mentioned above are also fine. Another way to look at this problem:
If you want to import anything from the package two levels up (the grandparent):
from ...module_name import *
If you want to import a module from the parent package:
from ..module_name import *
If you want to import a particular module that lives inside the grandparent package:
from ...module_name.another_module import *
This way you can import any particular method you want.

Two line simplest solution
import os, sys
sys.path.insert(0, os.getcwd())
This works if the parent is your working directory and you want to call child modules from child scripts.
You can then import any child module from the parent directory in your scripts and execute it as
python child_module1/child_script.py

For me the shortest and my favorite one-liner for accessing the parent directory is:
sys.path.append(os.path.dirname(os.getcwd()))
or:
sys.path.insert(1, os.path.dirname(os.getcwd()))
os.getcwd() returns the name of the current working directory, and os.path.dirname(directory_name) returns the parent directory of the passed one.
Actually, in my opinion a Python project's architecture should be designed so that no module in a child directory uses any module from a parent directory. If something like this happens, it is worth rethinking the project tree.
Another way is to add parent directory to PYTHONPATH system environment variable.

Though the original author is probably no longer looking for a solution, for completeness: there is one simple solution. It is to run life.py as a module, like this:
cd ptdraft
python -m simulations.life.life
This way you can import anything from nib.py, as the ptdraft directory is on the path.
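For illustration, a sketch of what life.py could contain under this approach (assuming it only needs a plain import of nib):
# ptdraft/simulations/life/life.py -- a sketch; run from inside ptdraft with
#   python -m simulations.life.life
import nib  # resolvable because the ptdraft directory is on sys.path

if __name__ == '__main__':
    print(nib.__name__)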

You can try this in that specific example (tested with Python 3.6.3):

import sys
sys.path.append('../')

Same sort of style as the previous answer, but in fewer lines :P
import os, sys
parentdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, parentdir)
__file__ is the path of the file you are working in; taking dirname twice gives its parent directory.

In a Linux system, you can create a soft link from the "life" folder to the nib.py file. Then, you can simply import it like:
import nib
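The link can be created once from a shell with ln -s, or from Python itself; a sketch (run from inside the life folder, and assuming the relative layout from the question):
import os

# create life/nib.py as a symbolic link pointing two levels up to ptdraft/nib.py
os.symlink('../../nib.py', 'nib.py')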

I have a solution specifically for git-repositories.
First I used sys.path.append('..') and similar solutions. This causes problems especially if you are importing files which are themselves importing files with sys.path.append('..').
I then decided to always append the root directory of the git repository. In one line it would look like this:
sys.path.append(git.Repo('.', search_parent_directories=True).working_tree_dir)
Or in more detail, like this:
import os
import sys
import git
def get_main_git_root(path):
    main_repo_root_dir = git.Repo(path, search_parent_directories=True).working_tree_dir
    return main_repo_root_dir
main_repo_root_dir = get_main_git_root('.')
sys.path.append(main_repo_root_dir)
For the original question: Based on what the root directory of the repository is, the import would be
import ptdraft.nib
or
import nib

Our folder structure:
/myproject
    project_using_ptdraft/
        main.py
    ptdraft/
        __init__.py
        nib.py
        simulations/
            __init__.py
            life/
                __init__.py
                life.py
The way I understand this is to have a package-centric view.
The package root is ptdraft, since it's the topmost level that contains __init__.py
All the files within the package can use absolute paths (that are relative to package root) for imports, for example
in life.py, we have simply:
import ptdraft.nib
However, to run life.py for package dev/testing purposes, instead of python life.py, we need to use:
cd /myproject
python -m ptdraft.simulations.life.life
Note that we didn't need to fiddle with any path at all at this point.
Further confusion arises when we complete the ptdraft package and want to use it in a driver script that is necessarily outside of the ptdraft package folder, i.e. project_using_ptdraft/main.py. Then we need to fiddle with paths:
import sys
sys.path.append("/myproject") # folder that contains ptdraft
import ptdraft
import ptdraft.simulations
and use python main.py to run the script without problem.
Helpful links:
https://tenthousandmeters.com/blog/python-behind-the-scenes-11-how-the-python-import-system-works/ (see how __init__.py can be used)
https://chrisyeh96.github.io/2017/08/08/definitive-guide-python-imports.html#running-package-initialization-code
https://stackoverflow.com/a/50392363/2202107
https://stackoverflow.com/a/27876800/2202107

Work with libraries.
Make a library called nib, install it using setup.py, let it reside in site-packages and your problems are solved.
You don't have to stuff everything you make into a single package. Break it up into pieces.

I had a problem where I had to import a Flask application, that had an import that also needed to import files in separate folders. This is partially using Remi's answer, but suppose we had a repository that looks like this:
.
└── service
    ├── misc
    │   └── categories.csv
    ├── test
    │   └── app_test.py
    ├── app.py
    └── pipeline.py
Then before importing the app object from the app.py file, we change the directory one level up, so when we import the app (which imports the pipeline.py), we can also read in miscellaneous files like a csv file.
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
os.chdir('../')
from app import app
After having imported the Flask app, you can change back with os.chdir('./test') so that your working directory ends up unchanged.

It seems to me that you don't really need to import the parent module. Let's imagine that nib.py has func1() and data1, which you need to use in life.py:
nib.py
import simulations.life.life as life

def func1():
    pass

data1 = {}

life.share(func1, data1)
life.py
func1 = data1 = None

def share(*args):
    global func1, data1
    func1, data1 = args
And now you have access to func1 and data1 in life.py. Of course you have to be careful to populate them before you try to use them.

I made this library to do this.
https://github.com/fx-kirin/add_parent_path
# Just add the parent path
add_parent_path(1)

# Append to sys.path and remove it again when the with statement exits.
with add_parent_path(1):
    # Import modules in the parent path
    pass

This is the simplest solution that works for me:
from ptdraft import nib

After removing some sys.path hacks, I thought it might be valuable to add my preferred solution.
Note: this is a frame challenge - it's not necessary to do this in code at all.
Assuming a tree,
project
└── pkg
    └── test.py
Where test.py contains
import sys, json; print(json.dumps(sys.path, indent=2))
Executing using the path only includes the package directory
python pkg/test.py
[
"/project/pkg",
...
]
But using the module argument includes the project directory
python -m pkg.test
[
"/project",
...
]
Now, all imports can be absolute, from the project directory. No further skullduggery required.
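To make that concrete, test.py could then use an ordinary absolute import. A sketch, where pkg/helper.py is a hypothetical sibling module added only for illustration:
# pkg/test.py -- sketch; run as `python -m pkg.test` from the project directory
from pkg import helper  # absolute import, found because /project is on sys.path

print(helper)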

Although it is against all rules, I still want to mention this possibility:
You can first copy the file from the parent directory to the child directory. Next import it and subsequently remove the copied file:
for example in life.py:
import os
import shutil
shutil.copy('../nib.py', '.')
import nib
os.remove('nib.py')
# now you can use it just fine:
nib.foo()
Of course several problems might arise when nib tries to import/read other files with relative imports/paths.

This works for me to import things from a higher folder.
import os
os.chdir('..')

Related

Importing modules from top-level to lower-level files (for the thousandth time) [duplicate]

I've been here:
http://www.python.org/dev/peps/pep-0328/
http://docs.python.org/2/tutorial/modules.html#packages
Python packages: relative imports
python relative import example code does not work
Relative imports in python 2.5
Relative imports in Python
Python: Disabling relative import
and plenty of URLs that I did not copy, some on SO, some on other sites, back when I thought I'd have the solution quickly.
The forever-recurring question is this: how do I solve this "Attempted relative import in non-package" message?
ImportError: attempted relative import with no known parent package
I built an exact replica of the package on pep-0328:
package/
__init__.py
subpackage1/
__init__.py
moduleX.py
moduleY.py
subpackage2/
__init__.py
moduleZ.py
moduleA.py
The imports were done from the console.
I did make functions named spam and eggs in their appropriate modules. Naturally, it didn't work. The answer is apparently in the 4th URL I listed, but it's all Greek to me. There was this response on one of the URLs I visited:
Relative imports use a module's name attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.
The above response looks promising, but it's all hieroglyphs to me. So my question, how do I make Python not return to me "Attempted relative import in non-package"? has an answer that involves -m, supposedly.
Can somebody please tell me why Python gives that error message, what it means by "non-package", why and how you define a 'package', and the precise answer, put in terms easy enough for a kindergartener to understand.
Script vs. Module
Here's an explanation. The short version is that there is a big difference between directly running a Python file, and importing that file from somewhere else. Just knowing what directory a file is in does not determine what package Python thinks it is in. That depends, additionally, on how you load the file into Python (by running or by importing).
There are two ways to load a Python file: as the top-level script, or as a
module. A file is loaded as the top-level script if you execute it directly, for instance by typing python myfile.py on the command line. It is loaded as a module when an import statement is encountered inside some other file. There can only be one top-level script at a time; the top-level script is the Python file you ran to start things off.
Naming
When a file is loaded, it is given a name (which is stored in its __name__ attribute).
If it was loaded as the top-level script, its name is __main__.
If it was loaded as a module, its name is [ the filename, preceded by the names of any packages/subpackages of which it is a part, separated by dots ], for example, package.subpackage1.moduleX.
But be aware, if you load moduleX as a module from shell command line using something like python -m package.subpackage1.moduleX, the __name__ will still be __main__.
So for instance in your example:
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
    moduleA.py
if you imported moduleX (note: imported, not directly executed), its name would be package.subpackage1.moduleX. If you imported moduleA, its name would be package.moduleA. However, if you directly run moduleX from the command line, its name will instead be __main__, and if you directly run moduleA from the command line, its name will be __main__. When a module is run as the top-level script, it loses its normal name and its name is instead __main__.
Accessing a module NOT through its containing package
There is an additional wrinkle: the module's name depends on whether it was imported "directly" from the directory it is in or imported via a package. This only makes a difference if you run Python in a directory, and try to import a file in that same directory (or a subdirectory of it). For instance, if you start the Python interpreter in the directory package/subpackage1 and then do import moduleX, the name of moduleX will just be moduleX, and not package.subpackage1.moduleX. This is because Python adds the current directory to its search path when the interpreter is entered interactively; if it finds the to-be-imported module in the current directory, it will not know that that directory is part of a package, and the package information will not become part of the module's name.
A special case is if you run the interpreter interactively (e.g., just type python and start entering Python code on the fly). In this case, the name of that interactive session is __main__.
Now here is the crucial thing for your error message: if a module's name has no dots, it is not considered to be part of a package. It doesn't matter where the file actually is on disk. All that matters is what its name is, and its name depends on how you loaded it.
Now look at the quote you included in your question:
Relative imports use a module's name attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system.
Relative imports...
Relative imports use the module's name to determine where it is in a package. When you use a relative import like from .. import foo, the dots indicate to step up some number of levels in the package hierarchy. For instance, if your current module's name is package.subpackage1.moduleX, then ..moduleA would mean package.moduleA. For a from .. import to work, the module's name must have at least as many dots as there are in the import statement.
... are only relative in a package
However, if your module's name is __main__, it is not considered to be in a package. Its name has no dots, and therefore you cannot use from .. import statements inside it. If you try to do so, you will get the "relative-import in non-package" error.
Scripts can't import relative
What you probably did is you tried to run moduleX or the like from the command line. When you did this, its name was set to __main__, which means that relative imports within it will fail, because its name does not reveal that it is in a package. Note that this will also happen if you run Python from the same directory where a module is, and then try to import that module, because, as described above, Python will find the module in the current directory "too early" without realizing it is part of a package.
Also remember that when you run the interactive interpreter, the "name" of that interactive session is always __main__. Thus you cannot do relative imports directly from an interactive session. Relative imports are only for use within module files.
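A quick way to see all of this in action (just a sketch) is to drop a print at the top of moduleX.py and load the file both ways:
# first line of package/subpackage1/moduleX.py (sketch)
print(__name__, __package__)

# python package/subpackage1/moduleX.py  -> __main__ None
#     (plain script: no package info, relative imports fail)
# python -m package.subpackage1.moduleX  -> __main__ package.subpackage1
#     (__name__ is still __main__, but __package__ carries the package
#      information, so relative imports work)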
Two solutions:
If you really do want to run moduleX directly, but you still want it to be considered part of a package, you can do python -m package.subpackage1.moduleX. The -m tells Python to load it as a module, not as the top-level script.
Or perhaps you don't actually want to run moduleX, you just want to run some other script, say myfile.py, that uses functions inside moduleX. If that is the case, put myfile.py somewhere else – not inside the package directory – and run it. If inside myfile.py you do things like from package.moduleA import spam, it will work fine.
Notes
For either of these solutions, the package directory (package in your example) must be accessible from the Python module search path (sys.path). If it is not, you will not be able to use anything in the package reliably at all.
Since Python 2.6, the module's "name" for package-resolution purposes is determined not just by its __name__ attribute but also by the __package__ attribute. (That's why I'm avoiding using the explicit symbol __name__ to refer to the module's "name". Since Python 2.6 a module's "name" is effectively __package__ + '.' + __name__, or just __name__ if __package__ is None.)
This is really a problem within Python. The origin of the confusion is that people mistakenly take relative imports to be relative to paths, which they are not.
For example when you write in faa.py:
from .. import foo
This has a meaning only if faa.py was identified and loaded by Python, during execution, as a part of a package. In that case, the module's name
for faa.py would be, for example, some_packagename.faa. If the file was loaded just because it is in the current directory when Python is run, then its name would not refer to any package and eventually the relative import would fail.
A simple solution to refer modules in the current directory, is to use this:
if __package__ is None or __package__ == '':
    # uses current directory visibility
    import foo
else:
    # uses current package visibility
    from . import foo
There are too many answers that are too long, so I'll try to keep it short.
If you write from . import module, contrary to what you might think, module will not be imported from the current directory, but from the package your file belongs to (its position in the package hierarchy)! If you run a .py file as a script, it simply doesn't know where the top level is and thus refuses to work.
If you start it like this, py -m package.module, from the directory above package, then Python knows where the top level is. That's very similar to Java: java -cp bin_directory package.class
So after carping about this along with many others, I came across a note posted by Dorian B in this article that solved the specific problem I was having: I develop modules and classes for use with a web service, but I also want to be able to test them as I'm coding, using the debugger facilities in PyCharm. To run tests in a self-contained class, I would include the following at the end of my class file:
if __name__ == '__main__':
    # run test code here...
but if I wanted to import other classes or modules in the same folder, I would then have to change all my import statements from relative notation to local references (i.e. remove the dot). But after reading Dorian's suggestion, I tried his 'one-liner' and it worked! I can now test in PyCharm and leave my test code in place when I use the class in another class under test, or when I use it in my web service!
# import any site-lib modules first, then...
import sys

parent_module = sys.modules['.'.join(__name__.split('.')[:-1]) or '__main__']
if __name__ == '__main__' or parent_module.__name__ == '__main__':
    from codex import Codex  # these are in same folder as module under test!
    from dblogger import DbLogger
else:
    from .codex import Codex
    from .dblogger import DbLogger
The if statement checks to see if we're running this module as main or if it's being used in another module that's being tested as main. Perhaps this is obvious, but I offer this note here in case anyone else frustrated by the relative import issues above can make use of it.
Here's a general recipe, modified to fit as an example, that I am using right now for dealing with Python libraries written as packages, that contain interdependent files, where I want to be able to test parts of them piecemeal. Let's call this lib.foo and say that it needs access to lib.fileA for functions f1 and f2, and lib.fileB for class Class3.
I have included a few print calls to help illustrate how this works. In practice you would want to remove them (and maybe also the from __future__ import print_function line).
This particular example is too simple to show when we really need to insert an entry into sys.path. (See Lars' answer for a case where we do need it, when we have two or more levels of package directories, and then we use os.path.dirname(os.path.dirname(__file__))—but it doesn't really hurt here either.) It's also safe enough to do this without the if _i not in sys.path test. However, if each imported file inserts the same path—for instance, if both fileA and fileB want to import utilities from the package—this clutters up sys.path with the same path many times, so it's nice to have the if _i not in sys.path in the boilerplate.
from __future__ import print_function  # only when showing how this works

if __package__:
    print('Package named {!r}; __name__ is {!r}'.format(__package__, __name__))
    from .fileA import f1, f2
    from .fileB import Class3
else:
    print('Not a package; __name__ is {!r}'.format(__name__))
    # these next steps should be used only with care and if needed
    # (remove the sys.path manipulation for simple cases!)
    import os, sys
    _i = os.path.dirname(os.path.abspath(__file__))
    if _i not in sys.path:
        print('inserting {!r} into sys.path'.format(_i))
        sys.path.insert(0, _i)
    else:
        print('{!r} is already in sys.path'.format(_i))
    del _i  # clean up global name space
    from fileA import f1, f2
    from fileB import Class3

... all the code as usual ...

if __name__ == '__main__':
    import doctest, sys
    ret = doctest.testmod()
    sys.exit(0 if ret.failed == 0 else 1)
The idea here is this (and note that these all function the same across python2.7 and python 3.x):
If run as import lib or from lib import foo, as a regular package import from ordinary code, __package__ is 'lib' and __name__ is 'lib.foo'. We take the first code path, importing from .fileA, etc.
If run as python lib/foo.py, __package__ will be None and __name__ will be __main__.
We take the second code path. The lib directory will already be in sys.path so there is no need to add it. We import from fileA, etc.
If run within the lib directory as python foo.py, the behavior is the same as for case 2.
If run within the lib directory as python -m foo, the behavior is similar to cases 2 and 3. However, the path to the lib directory is not in sys.path, so we add it before importing. The same applies if we run Python and then import foo.
(Since . is in sys.path, we don't really need to add the absolute version of the path here. This is where a deeper package nesting structure, where we want to do from ..otherlib.fileC import ..., makes a difference. If you're not doing this, you can omit all the sys.path manipulation entirely.)
Notes
There is still a quirk. If you run this whole thing from outside:
$ python2 lib.foo
or:
$ python3 lib.foo
the behavior depends on the contents of lib/__init__.py. If that exists and is empty, all is well:
Package named 'lib'; __name__ is '__main__'
But if lib/__init__.py itself imports foo so that it can export foo.name directly as lib.name, you get:
$ python2 lib.foo
Package named 'lib'; __name__ is 'lib.foo'
Package named 'lib'; __name__ is '__main__'
That is, the module gets imported twice, once via the package and then again as __main__ so that it runs your main code. Python 3.6 and later warn about this:
$ python3 lib.foo
Package named 'lib'; __name__ is 'lib.foo'
[...]/runpy.py:125: RuntimeWarning: 'lib.foo' found in sys.modules
after import of package 'lib', but prior to execution of 'lib.foo';
this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Package named 'lib'; __name__ is '__main__'
The warning is new, but the warned-about behavior is not. It is part of what some call the double import trap. (For additional details see issue 27487.) Nick Coghlan says:
This next trap exists in all current versions of Python, including 3.3, and can be summed up in the following general guideline: "Never add a package directory, or any directory inside a package, directly to the Python path".
Note that while we violate that rule here, we do it only when the file being loaded is not being loaded as part of a package, and our modification is specifically designed to allow us to access other files in that package. (And, as I noted, we probably shouldn't do this at all for single level packages.) If we wanted to be extra-clean, we might rewrite this as, e.g.:
import os, sys
_i = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if _i not in sys.path:
    sys.path.insert(0, _i)
else:
    _i = None
from sub.fileA import f1, f2
from sub.fileB import Class3
if _i:
    sys.path.remove(_i)
del _i
That is, we modify sys.path long enough to achieve our imports, then put it back the way it was (deleting one copy of _i if and only if we added one copy of _i).
Here is one solution that I would not recommend, but might be useful in some situations where modules were simply not generated:
import os
import sys
parent_dir_name = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.append(parent_dir_name + "/your_dir")
import your_script
your_script.a_function()
@BrenBarn's answer says it all, but if you're like me it might take a while to understand. Here's my case and how @BrenBarn's answer applies to it; perhaps it will help you.
The case
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
    moduleA.py
Using our familiar example, add to it that moduleX.py has a relative import of ..moduleA. Given that, I tried writing a test script in the subpackage1 directory that imported moduleX, but then got the dreaded error described by the OP.
Solution
Move test script to the same level as package and import package.subpackage1.moduleX
Explanation
As explained, relative imports are made relative to the current name. When my test script imports moduleX from the same directory, then the module name inside moduleX is just moduleX. When it encounters a relative import, the interpreter can't back up the package hierarchy because it's already at the top.
When I import moduleX from above, then the name inside moduleX is package.subpackage1.moduleX and the relative import can be resolved.
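In other words, the test script ends up next to the package directory and uses the full dotted path. A minimal sketch (some_function is hypothetical; call whatever moduleX actually defines):
# test_moduleX.py, placed in the same directory that contains package/
from package.subpackage1 import moduleX

moduleX.some_function()  # hypothetical name, used only for illustration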
Following up on what Lars has suggested, I've wrapped this approach in an experimental new import library: ultraimport
It gives the programmer more control over imports and it allows file-system-based imports. Therefore, you can do relative imports from scripts. A parent package is not necessary. ultraimport will always work, no matter how you run your code or what your current working directory is, because ultraimport makes imports unambiguous. You don't need to change sys.path, and you also don't need a try/except block to sometimes do relative imports and sometimes absolute.
You would then write in somefile.py something like:
import ultraimport
foo = ultraimport('__dir__/foo.py')
__dir__ is the directory of somefile.py, the caller of ultraimport(). foo.py would live in the same directory as somefile.py.
One caveat when importing scripts like this is if they contain further relative imports. ultraimport has a builtin preprocessor to rewrite subsequent relative imports to ultraimports so they continue to work. Though, this is currently somewhat limited as original Python imports are ambiguous and there's only so much you can do about it.
I had a similar problem where I didn't want to change the Python module search
path and needed to load a module relatively from a script (in spite of "scripts can't import relative at all", as BrenBarn explained nicely above).
So I used the following hack. Unfortunately, it relies on the imp module, which
has been deprecated since version 3.4 and will be dropped in favour of importlib.
(Is this possible with importlib, too? I don't know.) Still, the hack works for now.
Example for accessing members of moduleX in subpackage1 from a script residing in the subpackage2 folder:
#!/usr/bin/env python3
import inspect
import imp
import os

def get_script_dir(follow_symlinks=True):
    """
    Return directory of code defining this very function.
    Should work from a module as well as from a script.
    """
    script_path = inspect.getabsfile(get_script_dir)
    if follow_symlinks:
        script_path = os.path.realpath(script_path)
    return os.path.dirname(script_path)

# loading the module (hack, relying on the deprecated imp module)
PARENT_PATH = os.path.dirname(get_script_dir())
(x_file, x_path, x_desc) = imp.find_module('moduleX', [PARENT_PATH + '/' + 'subpackage1'])
module_x = imp.load_module('subpackage1.moduleX', x_file, x_path, x_desc)

# importing a function and a value
function = module_x.my_function
VALUE = module_x.MY_CONST
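To answer the question in passing: yes, the same load can be done with importlib instead of the deprecated imp module. A sketch, reusing PARENT_PATH from the code above (I have not tested every corner case):
import importlib.util

spec = importlib.util.spec_from_file_location(
    'subpackage1.moduleX', PARENT_PATH + '/subpackage1/moduleX.py')
module_x = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module_x)

function = module_x.my_function
VALUE = module_x.MY_CONST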
A cleaner approach seems to be to modify the sys.path used for loading modules as mentioned by Federico.
#!/usr/bin/env python3
if __name__ == '__main__' and __package__ is None:
    from os import sys, path
    # __file__ should be defined in this case
    PARENT_DIR = path.dirname(path.dirname(path.abspath(__file__)))
    sys.path.append(PARENT_DIR)

from subpackage1.moduleX import *
__name__ changes depending on whether the code in question is run in the global namespace or as part of an imported module.
If the code is not running in the global space, __name__ will be the name of the module. If it is running in global namespace -- for example, if you type it into a console, or run the module as a script using python.exe yourscriptnamehere.py then __name__ becomes "__main__".
You'll see a lot of python code with if __name__ == '__main__' is used to test whether the code is being run from the global namespace – that allows you to have a module that doubles as a script.
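The usual pattern for a module that doubles as a script is then simply (a minimal sketch):
# mymodule.py
def main():
    print('running as a script')

if __name__ == '__main__':  # True only when executed directly, not when imported
    main()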
Did you try to do these imports from the console?
Relative imports use a module's name attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.
I wrote a little Python package to PyPI that might help viewers of this question. The package acts as a workaround if one wishes to be able to run Python files containing imports of upper-level packages from within a package/project, without being directly in the importing file's directory. https://pypi.org/project/import-anywhere/
In most cases when I see the ValueError: attempted relative import beyond top-level package and pull my hair out, the solution is as follows:
You need to step one level higher in the file hierarchy!
#dir/package/module1/foo.py
#dir/package/module2/bar.py
from ..module1 import foo
Importing bar.py when interpreter is started in dir/package/ will result in error despite the import process never going beyond your current directory.
Importing bar.py when interpreter is started in dir/ will succeed.
Similarly for unit tests:
python3 -m unittest discover --start-directory=. successfully works from dir/, but not from dir/package/.

Problem importing python file from folder above

I know there are heaps of questions and answers for this. I tried a multitude of Stack Overflow links, but none of them seem to help.
My project structure is:
volume_price_analysis/
    README.md
    TODO.md
    build/
    docs/
    requirements.txt
    setup.py
    vpa/
        __init__.py
        database_worker.py
        utils.py
        test/
            __init__.py
            test_utils.py
            input/
                input_file.txt
I want to load utils.py inside test_utils.py
my test_utils.py is:
import unittest
import logging
import os

from .vpa import utils

class TestUtils(unittest.TestCase):
    def test_read_file(self):
        input_dir = os.path.join(os.path.join(os.getcwd() + "/test/input"))
        file_name = "input_file.txt"
        with open(os.path.join(input_dir + "/" + file_name)) as f:
            file_contents = f.read()
            f.close()
        self.assertEqual(file_contents, "Hello World!\n")

if __name__ == '__main__':
    unittest.main()
I want to run (say inside test folder):
python3 -m test_utils.py
I cannot do that; I get a bunch of errors regarding imports of utils (I tried many iterations of ., no ., from this import that, etc.).
Why is this so bloody complicated?
I am using Python 3.7 if that helps.
As per this answer, you can do it using importlib:
in spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py"), instead of /path/to/file.py you can use ../utils.py. Also, since importlib already provides something with a similar name (importlib.util), you should give one of them another name, i.e. don't keep module.name as utils, or import it under a different alias.
I figured it out: it turns out Python prefers you to run your code from the top-level folder, in my case the volume_price_analysis folder. All I had to do was make a shell script that calls
python3 -m unittest vpa.test.test_utils
And inside test_utils I can import whatever I want, as long as I remember that I am executing the code from the main folder, so loading utils.py becomes
from vpa import utils inside test_utils.

No module named xxxx. How to import relative path?

I have created a simplified version so as to focus solely on getting the relative path to work. This is my file structure:
project
|- package1
|  |- page
|  |- __init__
|
|- package2
   |- test
   |- __init__
I am trying to import page into test. However, I get the error that package1 is not a module. Below I have typed all that are in my code. Very simple. I am just trying to import page into test. Is there anything I am missing (file or page set up) that is preventing me from importing?
page.py
one='half'
two='ling'
tests.py
import os
import sys
three = (one+two)
print(three)
Have you tried "from package1 import page" in your test.py? Or "from package1.page import page"?
UPDATE
When importing something, the Python interpreter searches the following places:
Built-ins
Current Directory
$PYTHONPATH, environment variable
some other directory related to the installation
The last three make up sys.path.
In your case, to import package1 into some script in package2, there are two ways:
Add project path into PYTHONPATH.
Dynamically append project path into sys.path
I guess you would appreciate the latter solution, just add
import sys
sys.path.append('..')
in front of everything and it will work.
Plus: it's kind of inconvenient to use a module that is not inside the current directory. I've seen only a few actual Python projects, and what I've seen is that some of them use a single main.py in the project root to run the whole project, including test cases. Maybe this structure is more recommended.
Hope it helps~
Original Answer:
This dir structure works fine on my computer with:
import package1.page as page
page.foo() # a function in page
Could I have a guess: your current working directory may not be under your project directory.
To check, test about this:
import os
print(os.getcwd())
If the output is not your current directory, that's my case. I used to mess with this before.
To avoid this, you can:
cd to your directory before running Python
run os.chdir(...) in your code, which is to change your working directory.
If not this case, please provide more information.

Importing submodules

I am new to Python and I'm having a really bad time trying to overcome a problem with the import system.
Let's say I have the file system presented below:
/src
/src/main.py
/src/submodules/
/src/submodules/submodule.py
/src/submodules/subsubmodules
/src/submodules/subsubmodules/subsubmodule.py
All the folders (src, submodules, subsubmodules) have an empty __init__.py file.
In submodule.py I have:
from subsubmodules import subsubmodule
In main.py I have:
from submodules import submodule
When I run submodule.py, Python accepts the import. But when I run main.py, Python raises an error for the import of subsubmodule.py, because the /src/submodules/subsubmodules/ folder is not in the path.
The only solution is to change the import in submodule.py to
from submodules.subsubmodules import subsubmodule
This seems to me an awful solution, because after that I cannot run submodule.py on its own, and I'm sure something else is the key to this.
Another solution is to add the following code to the __init__.py file:
import os
import sys
import inspect
cmd_subfolder = os.path.split(inspect.getfile(inspect.currentframe()))[0]
if cmd_subfolder not in sys.path:
    sys.path.insert(0, cmd_subfolder)
Is there any way to do this using just Python's import system, and not other methods that do it manually using, for example, sys.path or other modules like os, inspect, etc.?
How can I import modules without caring about the modules they import?
You can run subsubmodule.py as
python3 -m submodules.subsubmodules.subsubmodule
If you want a shorter way to invoke it, you're free to add a shell or Python script for that on the top level of your package.
This is how imports work in Python 3; there are reasons for that.
You can avoid this issue by using sys.path in your program.
sys.path.insert(0, './lib')
import subsubmodule
For this code, you can put all your imports to a lib folder.
You can read the official documentation on Python packages where this is explained in depth.

Import a module from both within same package and from outside the package in Python 3

Okay, the scenario is very simple. I have this file structure:
.
├── interface.py
├── pkg
│   ├── __init__.py
│   ├── mod1.py
│   ├── mod2.py
Now, these are my conditions:
mod2 needs to import mod1.
both interface.py and mod2 needs to be run independently as a main script. If you want, think interface as the actual program and mod2 as an internal tester of the package.
So, in Python 2 I would simply do import mod1 inside mod2.py and both python2 mod2.py and python2 interface.py would work as expected.
However, and this is the part I understand least: using Python 3.5.2, if I do import mod1, then I can do python3 mod2.py, but python3 interface.py throws: ImportError: No module named 'mod1' :(
So, apparently, python 3 proposes to use import pkg.mod1 to avoid collisions against built-in modules. Ok, If I use that I can do python3 interface.py; but then I can't python3 mod2.py because: ImportError: No module named 'pkg'
Similarly, If I use relative import:
from . import mod1 then python3 interface.py works; but mod2.py says SystemError: Parent module '' not loaded, cannot perform relative import :( :(
The only "solution", I've found is to go up one folder and do python -m pkg.mod2 and then it works. But do we have to be adding the package prefix pkg to every import to other modules within that package? Even more, to run any scripts inside the package, do I have to remember to go one folder up and use the -m switch? That's the only way to go??
I'm confused. This scenario was pretty straightforward with python 2, but looks awkward in python 3.
UPDATE: I have upload those files with the (referred as "solution" above) working source code here: https://gitlab.com/Akronix/test_python3_packages. Note that I still don't like it, and looks much uglier than the python2 solution.
Related SO questions I've already read:
Python -- import the package in a module that is inside the same package
How to do relative imports in Python?
Absolute import module in same package
Related links:
https://docs.python.org/3.5/tutorial/modules.html
https://www.python.org/dev/peps/pep-0328/
https://www.python.org/dev/peps/pep-0366/
TLDR:
Run your code with python -m pkg.mod2.
Import your code with from . import mod1.
The only "solution", I've found is to go up one folder and do python -m pkg.mod2 and then it works.
Using the -m switch is indeed the "only" solution - it was already the only solution before. The old behaviour simply only ever worked out of sheer luck; it could be broken without even modifying your code.
Going "one folder up" merely adds your package to the search path. Installing your package or modifying the search path works as well. See below for details.
But do we have to be adding the package prefix pkg to every import to other modules within that package?
You must have a reference to your package - otherwise it is ambiguous which module you want. The package reference can be either absolute or relative.
A relative import is usually what you want. It saves writing pkg explicitly, making it easier to refactor and move modules.
# inside mod1.py
# import mod2 - this is wrong! It can pull in an arbitrary mod2 module
# these are correct, they uniquely identify the module
import pkg.mod2
from pkg import mod2
from . import mod2
from .mod2 import foo # if pkg.mod2.foo exists
Note that you can always use <import> as <name> to bind your import to a different name. For example, import pkg.mod2 as mod2 lets you work with just the module name.
Even more, to run any scripts inside the package, do I have to remember to go one folder up and use the -m switch? That's the only way to go??
If your package is properly installed, you can use the -m switch from anywhere. For example, you can always use python3 -m json.tool.
echo '{"json":"obj"}' | python -m json.tool
If your package is not installed (yet), you can set PYTHONPATH to its base directory. This includes your package in the search path, and allows the -m switch to find it properly.
If you are in the executable's directory, you can execute export PYTHONPATH="$(pwd)/.." to quickly mount the package for import.
I'm confused. This scenario was pretty straightforward with python 2, but looks awkward in python 3.
This scenario was basically broken in python 2. While it was straightforward in many cases, it was difficult or outright impossible to fix in any other cases.
The new behaviour is more awkward in the straightforward case, but robust and reliable in any case.
I had a similar problem.
I solved it by adding
import sys
sys.path.insert(0,".package_name")
into the __init__.py file in the package folder.

Resources