Setup
I have the following tree structure in my project:
Cineaste/
├── cineaste/
│ ├── __init__.py
│ ├── metadata_errors.py
│ ├── metadata.py
│ └── tests/
│ └── __init__.py
├── docs/
├── LICENSE
├── README.md
└── setup.py
metadata.py imports metadata_errors.py with the expression:
from .metadata_errors import *
This is a relative import of a module in the same package (note the dot prefix).
I can run metadata.py in the PyCharm 2016 editor just fine with the following configuration:
Problem
However, with this configuration I cannot debug metadata.py. PyCharm returns the following error message (partial stack trace):
from .metadata_errors import *
SystemError: Parent module '' not loaded, cannot perform relative import
The PyCharm debugger is invoked like so:
/home/myself/.pyenv/versions/cineaste/bin/python /home/myself/bin/pycharm-2016.1.3/helpers/pydev/pydevd.py --multiproc --module --qt-support --client 127.0.0.1 --port 52790 --file cineaste.metadata
Question
How can I set up this project so that PyCharm is able to run and debug a file that makes relative imports?
Today (PyCharm 2018.3) it is really easy, though not obvious.
You can choose the target to run, either a script path or a module name, by clicking the "Script path" label in the Run/Debug configuration window:
One possible solution is to run your module through an intermediate script that you launch in debug mode.
E.g. test_runner.py:
import runpy

# executing metadata as a module (rather than a file path) lets its relative imports resolve
runpy.run_module('cineaste.metadata')
You might also try removing the last node (/cineaste) from the Working Directory. This configuration works for both running and debugging for me (in PyCharm 2017.2.2).
I would also suggest not using import *, since it can cause problems later on, for example when two classes or functions end up with the same name.
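To illustrate the explicit alternative, a minimal sketch of the import in metadata.py; MetadataError and MissingFieldError are hypothetical names standing in for whatever metadata_errors actually defines:
# metadata.py -- sketch only; the imported names are placeholders for
# whatever metadata_errors really exports
from .metadata_errors import MetadataError, MissingFieldError
Each imported name is then visible to readers and tools, and a later rename in metadata_errors cannot silently shadow something else.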
Related
I recently created a Snakemake profile using the guide at Snakemake-Profiles/slurm. The profile installed successfully, and it works when I pass its path directly. However, when I use the profile name, as in
snakemake --profile slurm --dry-run
I get the error:
Error: profile given but no config.yaml found. Profile has to be given as
either absolute path, relative path or name of a directory available in either
/etc/xdg/snakemake or /home/GROUP/USERNAME/.config/snakemake.
I have indeed installed the profile under ~/.config/snakemake. Here is the tree of this directory:
/home/GROUP/USERNAME/.config/snakemake
.
└── slurm
├── cluster_config.yaml
├── config.yaml
├── CookieCutter.py
├── __pycache__
│ ├── CookieCutter.cpython-39.pyc
│ └── slurm_utils.cpython-39.pyc
├── settings.json
├── slurm-jobscript.sh
├── slurm-status.py
├── slurm-submit.py
└── slurm_utils.py
2 directories, 10 files
I can continue to specify the path to this profile when running Snakemake, but it would be useful to simply give it the name of the profile. Does anyone happen to know why Snakemake doesn't seem to be picking up that the profile slurm exists?
I solved my issue by installing Snakemake in a Conda environment and re-installing the profile. I'm not sure whether it was the Conda environment or the profile re-install that fixed it.
I've been working on a CRUD SAM application using the Python 3.8 runtime. My Lambda functions reference a Lambda layer that holds code shared among the functions. The application builds and deploys, and I can invoke the functions locally. However, when writing unit tests (using pytest), I'm not sure how to handle imports that reference the layer inline, since they don't match the repository's file structure.
File structure:
.
├── template.yaml
├── _layer-folder
│ └── _python
│ └── shared_code.py
├── _lambda
│ ├── some_function.py
│ └── _tests
│ └── test-some-function.py
When running the tests for my Lambda functions, I get an import error whenever I reference a module that lives in the shared layer. For example:
from some_module_in_a_layer import some_layer_function
Is there a way to configure pytest to reference the correct file directory when running the tests?
I ended up resolving this by appending to sys.path in my __init__.py file when testing or running locally:
import os
import sys
if os.environ.get("ENVIRONMENT") == "test":
    sys.path.append(os.getcwd() + '/path/to/layer')
It's indeed a pain to test layers properly, and the fix depends on how you run the tests (e.g. in PyCharm or from the terminal). In PyCharm you can mark the layer directory as a sources root (right-click it, Mark Directory as > Sources Root). To run from the terminal you can add it to PYTHONPATH beforehand, which is quite ugly:
PYTHONPATH='/path/to/layer' python main.py
It's not pretty, but I don't know another way to fix that, to be honest.
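As a hedged alternative to exporting PYTHONPATH by hand, a top-level conftest.py can prepend the layer directory before pytest collects the tests; the layer-folder/python path below is an assumption based on the tree in the question:
# conftest.py at the repository root -- sketch only; point this at wherever
# the layer code actually lives
import os
import sys

LAYER_DIR = os.path.join(os.path.dirname(__file__), "layer-folder", "python")
sys.path.insert(0, LAYER_DIR)
pytest imports this file before the test modules, so imports such as from shared_code import ... resolve without touching the environment.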
I added this line in my test and it worked perfectly.
test_file.py
import os, sys
import pytest
sys.path.append(os.getcwd() + '/../layer_location')
Inside the layer location, my file system looks like this:
layer_location/
    __init__.py
    layer_code.py
The import in my non-test code is then:
code_under_test.py
from layer_code import foo
foo()
In my company, we have a Python project that contains a large hierarchy of packages and modules shared by our different applications. But what seemed like a good idea for pooling common code has become horribly difficult to use and maintain.
Depending on the end project, we use a single module from this library, a single package, or many of them. Some modules and packages are independent, while others depend on other packages and modules from the same library, and of course those modules also depend on third-party packages.
I would like to make it as modular as possible, i.e. I would like to deal with the following cases:
use the whole library
use a single package from that library (whether it is a top level package or not)
use a single module from the library
use multiple packages/modules from the library (possibly interdependent)
Moreover, a strong constraint is not to break existing code, so that I can carry out this transformation without breaking all of my coworkers' projects...
Here is an example file tree that represents the situation:
library
├── a
│ └── i
│ ├── alpha.py # each module may depend on any other package / module
│ └── beta.py
├── b
│ ├── delta.py
│ ├── gamma.py
│ └── j
│ └── epsilon.py
├── c
│ ├── mu.py
│ └── nu.py
├── requirements.txt
└── setup.py
The best solution I found is to add a setup.py and a requirements.txt in every folder of the tree (a rough sketch of such a per-folder setup.py follows the list). But this has serious limitations:
I cannot use a single module (I have to use a package).
When I use a package, I have to change its import statements. For example, if I currently write from library.a.i import alpha, I would like that line to keep working unchanged afterwards.
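For illustration, here is roughly what such a per-folder setup.py could look like for folder b; every name here is invented, and I am assuming each folder also gets an __init__.py so it can be packaged. Note that installing b this way publishes delta, gamma and j as top-level names, which is exactly why imports like from library.b import delta stop working:
# library/b/setup.py -- sketch only; names, version and layout are assumptions
from setuptools import setup

setup(
    name="library-b",
    version="0.1.0",
    py_modules=["delta", "gamma"],  # modules that sit next to this setup.py
    packages=["j"],                 # sub-package (requires j/__init__.py)
    install_requires=[],            # plus whatever b/requirements.txt lists
)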
Moreover, I am quite sure I am forgetting some of the constraints I have...
So is what I am trying to achieve feasible, or is it utopian?
What you can do is the following:
You need to have PYTHONPATH pointing at library, or append it to sys.path,
e.g. sys.path.insert(0, 'path_to_library')
If you create an __init__.py at each level of your folder tree, you will be able to import whichever level or module you are interested in, e.g.:
in folder b (b/__init__.py):
from .delta import *
from .gamma import *
from .j import *
in folder j (b/j/__init__.py):
from .epsilon import *
You can now do, in any Python script:
from b import * : imports everything that b's modules expose
from b.j import * : imports only epsilon's contents
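Putting it together, a minimal sketch of a consuming script (the library path is an assumption, and every folder is assumed to have the __init__.py files described above):
# consumer.py -- sketch only; adjust the path to wherever "library" lives on disk
import sys
sys.path.insert(0, '/path/to/library')

from b import *        # everything re-exported by b/__init__.py
from b.j import *      # only what epsilon.py provides
from a.i import alpha  # a single module can still be imported directly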
I've established a private GitHub organization for common Python repos. Each repo is basically a distinct third-party-style Python package (like numpy, for example), but homegrown. These are to be used across different projects.
At the moment the repos are just source packages, with no wheel or sdist releases built, so each has a setup.py and a directory structure for the modules/business logic of the library. Basically the repos look somewhat like this: https://packaging.python.org/tutorials/packaging-projects/
For now, I don't want to address building releases or running a private PyPI server. What I need help/guidance on is what to do when a repo is not just a library but also ships a CLI tool (that uses the library).
I expect the user to do one of several things: clone it, set PYTHONPATH/PATH accordingly and use it, or package it and pip install it. But should the CLI tool be included inside that repo, or outside of it? And how does one invoke it (i.e. python -m ...)?
What's strange to me is that packaging seems geared towards pure libraries rather than libraries plus tools. Any help with my thought process on this, and with how to invoke the tool?
Thanks to #phd for helping me walk the dog.
For my package project, I define a setup.py (a surrogate makefile, in Python parlance) which registers this script:
import setuptools

setuptools.setup(
    name="pkg_name",  # replace with your package name
    version="0.0.1",  # see PEP 440
    ...
    scripts=['bin/simple_cli.py'],  # callable script to register (installed onto PATH)
    ...
)
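As an aside, a hedged sketch of the console_scripts alternative, which installs a generated wrapper executable instead of copying the script verbatim; pkg_name.cli:main is an assumption about where the CLI's main() function lives:
# setup.py fragment -- sketch only; pkg_name.cli:main is a hypothetical target
import setuptools

setuptools.setup(
    name="pkg_name",
    version="0.0.1",
    packages=setuptools.find_packages(),
    entry_points={
        "console_scripts": [
            "simple-cli = pkg_name.cli:main",  # installs a simple-cli command on PATH
        ],
    },
)
Either way, pip install . ends up putting a command on the PATH of the active environment.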
Now, in the package project itself, the basic structure is as follows; note the bin/ directory:
$ tree -L 3
.
├── bin
│ └── simple_cli.py
├── contributing.md
├── LICENSE
├── makefile
├── pkg_name
│ ├── example_module.py
│ ├── __init__.py
├── README.md
├── requirements.txt
├── setup.py
└── tests
├── test_main.py
Once this is built (sdist, wheel, etc.), we can install it with pip install . in a virtual environment. Here is where simple_cli.py ends up:
The comments above have some references, but the end state is that the script is installed into the venv's bin/ directory (which is on PATH while the venv is activated).
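Regarding the python -m part of the question: a minimal sketch, assuming the CLI logic lives in a main() function in a hypothetical pkg_name/cli.py, is to add a __main__.py to the package:
# pkg_name/__main__.py -- sketch only; cli.main is a hypothetical entry function
from pkg_name.cli import main

if __name__ == "__main__":
    main()
After pip install . (or with the repo root on PYTHONPATH), python -m pkg_name then runs the same CLI.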
I have a Google Cloud Function that processes messages from a queue. When I put all the modules in the root of the function (where main.py and requirements.txt live), everything works fine. If I move the modules into a local package as shown here in the docs, then when I deploy the function by uploading the zip file through the Cloud Console I get an error saying Build failed: Build error details not available, with no other information. This layout:
.
├── main.py
├── module_one.py
├── module_two.py
└── requirements.txt
at the root of the archive I upload works just fine, while this layout:
.
├── main.py
├── requirements.txt
├── local_package_one/
│ ├── __init__.py
│ └── module_one.py
└── local_package_two/
├── __init__.py
└── module_two.py
earns me Build failed: Build error details not available. In the second configuration I updated all the affected import statements. I initially suspected the requirements.txt file, since it isn't shown in the example, but here they state that it should work just fine. The example there also shows a top-level folder named after the function, so I tried putting that at the root of the archive with everything inside it, and got the same result.
I changed the imports from
from module_one import MyClass
to
from local_package_one.module_one import MyClass
This could be any of a number of issues, but without more details it's hard to say.
This general pattern works though, e.g.:
$ tree
.
├── local_package_one
│ ├── __init__.py
│ └── module_one.py
├── local_package_two
│ ├── __init__.py
│ └── module_two.py
├── main.py
└── requirements.txt
$ cat main.py
from local_package_one.module_one import hello
from local_package_two.module_two import world
def test(request):
    return hello + ' ' + world
$ cat local_package_one/module_one.py
hello = "hello"
$ cat local_package_two/module_two.py
world = "world"
$ gcloud functions deploy test --runtime python37 --trigger-http --allow-unauthenticated
Deploying function (may take a while - up to 2 minutes)...done.
$ curl https://<my-function>.cloudfunctions.net/test
hello world%