Packaging Multiple Python Files - python-3.x

I am currently using this guide to package up my project wasp. However, at the moment everything lives inside the single wasp file.
That's not ideal. I would rather have the classes in separate files so the project can be managed more effectively. I have the files I need in the debian directory, but I'm not sure how to configure the packaging to include multiple files.
Is there a way to change my packaging so it packages more than just the one script file?

I'm not a debian package or Python expert, but one way would be to copy the various source files to another location (outside of /usr/bin), and then have /usr/bin/wasp call out to them.
Say you put all of your python code in src/ in the root of your repo. In the debian/install file, you'd have:
wasp usr/bin
src/* usr/lib/wasp/
You'd then just need /usr/bin/wasp to call some entry point in src. For example,
#!/usr/bin/python3
import sys
sys.path.append('/usr/lib/wasp/')
import wasp # or whatever you expose in src
# ...
Again, I don't know the best practices here (either in directory or python usage) but I think this would at least work!
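To flesh that out, here is a minimal sketch of the full wrapper, assuming the code installed into /usr/lib/wasp/ exposes a main() function (that name is hypothetical; use whatever entry point you actually define):
#!/usr/bin/python3
import sys

# the directory that debian/install copied src/* into
sys.path.append('/usr/lib/wasp/')

import wasp  # or whatever module you expose in src

if __name__ == '__main__':
    sys.exit(wasp.main())  # hypothetical entry point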

Related

Relative imports within a git repo

I want to create a git repo that can be used like this:
git clone $PROJECT_URL my_project
cd my_project
python some_dir/some_script.py
And I want some_dir/some_script.py to import from another_dir/some_module.py.
How can I accomplish this?
Some desired requirements, in order of decreasing importance to me:
No sys.path modifications from within any of the .py files. This leads to fragility when doing IDE-powered automated refactoring.
No directory structure changes. The repo has been thoughtfully structured.
No changes to my environment. I don't want to add a hard-coded path to my $PYTHONPATH for instance, as that can result in unexpected behavior when I cd to other directories and launch unrelated python commands.
Minimal changes to the sequence of 3 commands above. I don't want a complicated workflow, I want to use tab-completion for some_dir/some_script.py, and I don't want to spend keystrokes on extra python cmdline flags.
I see four solutions to my general problem described here, but none of them meet all of the above requirements.
If no solution is possible, then why are things this way? This seems like such a natural want, and the requirements I list seem perfectly reasonable. I'm aware of a religious argument in a 2007 email from Guido:
I'm -1 on this and on any other proposed twiddlings of the __main__
machinery. The only use case seems to be running scripts that happen
to be living inside a module's directory, which I've always seen as an
antipattern. To make me change my mind you'd have to convince me that
it isn't.
But not sure if things have changed since then.
Opinions haven't changed on this topic since Guido's 2007 comment. If anything, we're moving even further in the opposite direction, with the addition of the PYTHONSAFEPATH environment variable and the corresponding -P option in 3.11:
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONSAFEPATH
https://docs.python.org/3/using/cmdline.html#cmdoption-P
These options will nerf direct sibling module imports too, requiring sys.path to be explicitly configured even for scripts!
So, scripts still can't easily do relative imports, and executable scripts living within a package structure are still considered an anti-pattern. What to do instead?! The widely accepted alternative here is to use the packaging feature of entry points. One type of entry-point group in packaging metadata is the "console_scripts" group, used to point to arbitrary callables defined within your package code. If you add entries in this group within your package metadata, then script wrappers for those callables will be auto-generated and put somewhere on $PATH at pip install time. No hacking of sys.path necessary.
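As a hedged sketch (the project, module, and script names here are hypothetical, not from the question): the metadata could declare an entry point such as some-script = "my_proj.cli:main" under [project.scripts] in pyproject.toml, and the callable it points to might be as simple as:
# my_proj/cli.py -- the callable that the console_scripts entry point references
import sys

def main() -> int:
    # real argument parsing and program logic would go here
    print("hello from the installed entry point")
    return 0

if __name__ == "__main__":
    sys.exit(main())
After pip install, a some-script wrapper appears on $PATH and invokes main() with no sys.path manipulation.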
That being said, it's still possible to run .py files directly as scripts, provided you've configured the underlying Python environment for them to resolve their dependencies (imports) correctly. To do that, you'll want to define a package structure and "install" the package so that your source code is visible on sys.path.
Here's a minimum example:
my_project
├── another_dir
│   ├── __init__.py        <-- __init__ file required for package dirs (it can be empty)
│   └── some_module.py
├── pyproject.toml         <-- packaging metadata lives here
└── some_dir               <-- no __init__ file necessary for non-packaged subdirs
    └── some_script.py
Minimal contents of the packaging definition in pyproject.toml:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "my_proj"
version = "0.1"
[tool.setuptools.packages.find]
namespaces = false
An additional once-off step is required to create/configure an environment in between the git clone and the script execution:
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
This makes sure that another_dir is available to import from the environment's site-packages directory, which is already one of the locations on sys.path (check with python -m site). That's what's required for any/all of these import statements to work from within the script file(s):
from another_dir import some_module
import another_dir.some_module
from another_dir.some_module import something
Note that this does not necessarily put the parent of another_dir onto sys.path directly. For an editable install, it will set up some scaffolding which makes your package appear to be "installed" in the site, which is sufficient for those imports to succeed. For a non-editable install (pip install without the -e flag), it will just copy your package directly into the site, compile the .pyc files, and then the code will be found by the normal SourceFileLoader.
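To make this concrete, hypothetical contents for the two files might look like the following (the function name something is an assumption for illustration):
# another_dir/some_module.py
def something():
    return "hello from another_dir"


# some_dir/some_script.py
from another_dir.some_module import something

if __name__ == "__main__":
    print(something())
With the editable install from above in place (and the virtual environment active), python some_dir/some_script.py prints the greeting, no sys.path tweaks required.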

import so file from different folder

I am using Ubuntu 20.04 and Python 3. I want to import the shared-object file "ext.so" like this:
import Ext
from another piece of code. But the .so file is in a different folder. What is the right way to do it?
What is the right way to do it?
your project should be structured like so:
-head
--sub1
---Ext.so
--sub2
---caller.py
you should have the folder containing head on your pythonpath somehow (by installing the python module using distutils, by having that folder as your working directory, by adding it to PYTHONPATH in .bashrc, or by appending it to sys.path in your script), and you should use
from head.sub1 import Ext
Granted that your .so file is a Python extension and not some other sort of dll, anyone installing your project should be able to run your code without any problems.
However, there is nothing stopping you from adding sub1 itself to your pythonpath and just doing import Ext (see the sketch below).
Edit: I am sorry, if head is in the pythonpath you only need to import from sub1, not head, so you should have the folder containing head in your pythonpath, my bad.
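A minimal sketch of that last sys.path variant, assuming the layout above and that caller.py lives in sub2 (the paths are illustrative only):
# sub2/caller.py
import os
import sys

# put sub1 on sys.path at runtime so the extension module can be found
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "sub1"))

import Ext  # loads Ext.so as a compiled Python extension module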

Copy non python files via package_data to Scripts directory

I have some scripts in my package, that rely on some template xml files.
Those scripts are callable by entry points and I wanted to reference the template files by a relative path.
When calling the script via python -m ..., the scripts themselves run from within lib\site-packages, and there the xml files are available because I put them in my setup.py like this:
setup(
    ...
    packages=['my_pck'],
    package_dir={'my_pck': 'python/src/my_pck'},
    package_data={'my_pck': ['reports/templates/*.xml']},
    ...
)
I know, I could copy those templates also by using data_files in my setup.py but using package_data seems better to me.
Unfortunately package_data seems not to copy those files to the Scripts folder where the entry points are located.
So my question is, is this even achievable via package_data and if, how?
Or is there a more pythonic, easier way to achieve this? Maybe not referencing those files via paths relative to the scripts?
Looks like importlib-resources might help here. This library is able to find the actual path to a resource file packaged as package_data by setuptools.
Access the package_data files from your code with something like this:
import importlib_resources

with importlib_resources.path('my_pck.reports.templates', 'a.xml') as xml_path:
    do_something(xml_path)
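On recent Python versions (3.9+), the standard library can do the same without the backport; a minimal sketch using the same hypothetical package and file names as above:
from importlib.resources import as_file, files

with as_file(files('my_pck.reports.templates') / 'a.xml') as xml_path:
    do_something(xml_path)  # xml_path is a real filesystem path inside this block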

libffi-d78936b1.so.6.0.4: cannot open shared object file Error on AWS Lambda function

I am trying to deploy a Python Lambda package with the watson_developer_cloud sdk. Cryptography is one of the many dependencies this package has. I have built this package on a Linux machine. My package includes the hidden file .libffi-d78936b1.so.6.0.4 too, but it is still not accessible to my Lambda function. I am still getting the 'libffi-d78936b1.so.6.0.4: cannot open shared object file' error.
I have built my packages on Vagrant server, using instructions from here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
Exact error:
Unable to import module 'test_translation': libffi-d78936b1.so.6.0.4: cannot open shared object file: No such file or directory
As a note: as explained in this solution, I have already created my package using zip -r9 $DIR/lambda_function.zip . instead of *, but it is still not working for me.
Any direction is highly appreciated.
The libffi-d78936b1.so.6.0.4 is in a hidden folder named .libs_cffi_backend.
So to add this hidden folder in your lambda zip, you should do something like:
zip -r ../lambda_function.zip * .[^.]*
That will create a zip file named lambda_function.zip in the directory above, containing all files in the current directory (the first *) and everything starting with . but not .. (that's what the [^.] part matches).
In a situation like this, I would invest some time setting up a local SAM environment so you can:
1 - Debug your Lambda
2 - Check what is being packaged and the file hierarchy
https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html
Alternatively you can remove this import and instrument your lambda function to print some of the files and directories it "sees".
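A tiny sketch of that instrumentation idea (the handler name is hypothetical; LAMBDA_TASK_ROOT is the directory Lambda unpacks your package into):
import os

def lambda_handler(event, context):
    # temporarily walk the deployment package to see which files actually made it in
    root_dir = os.environ.get("LAMBDA_TASK_ROOT", ".")
    for root, dirs, files in os.walk(root_dir):
        print(root, files)
    return "done"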
I strongly recommend giving SAM a try though, since it will make not only this debugging way easier but also any further testing you need to perform down the road. Lambdas are tricky to debug.
A little late, and I would comment on Frank's answer but I don't have enough reputation.
I was including the hidden directory .libs_cffi_backend in my deployment package, but for some reason Lambda could not find the libffi-d78936b1.so.6.0.4 file located within it.
After copying this file into the same 'root' level directory as my lambda handler, it was able to load the dependency and execute.
Also, make sure all the files in the deployment package are readable: chmod -R 644 .

Making an Executable out of an entire Python Project

Is there any way I can make an executable out of my Python project? There are many Python scripts in my project, and there are SQLite db files as well as other files and folders that are required for the software to run correctly. What is the best way of making this entire project executable? Should I only make the Python scripts executable?
I have tried PyInstaller but I am not sure how to bundle all the files into one single executable. Shown above is a copy of all the files and folders in my directory.
I think you need to modify the spec file, which PyInstaller creates on the first run. There are dedicated parameters there for bundled files:
datas: data files needed by the scripts (your SQLite databases, templates, and so on);
binaries: non-python modules needed by the scripts, including names given by the --add-binary option.
Try adding your database and other data files to the datas field and they should be included in your package.
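A hedged sketch of the relevant part of such a .spec file (the file and folder names are hypothetical; each datas entry is a (source, destination inside the bundle) pair):
a = Analysis(
    ['main.py'],                     # your entry-point script
    datas=[
        ('app.db', '.'),             # e.g. a SQLite database, copied next to the bundled code
        ('templates', 'templates'),  # e.g. a whole folder of supporting files
    ],
    binaries=[],                     # compiled .so/.dll dependencies would go here
)
After editing, rebuild from the spec (pyinstaller your_project.spec) rather than from the .py file, so your additions are not regenerated away.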
For further questions I recommend referring to the official documentation and checking the examples on GitHub.
