According to PEP 632, distutils will be formally marked as deprecated, and in Python 3.12, it will be removed. My product is soon going to support Python 3.10 and I don't want to put up with deprecation warnings, so I would like to remove references to distutils now. The problem is that I can't find good, comprehensive documentation that systematically lets me know that A in distutils can be replaced by B in modules C, D, and E. The Migration Advice in the PEP is surprisingly sketchy, and I haven't found standard documentation for distutils, or for whatever modules (such as setuptools?) that are required to replace distutils, that would let me fill in the gaps. Nor am I sure how to look at the content of the installed standard distribution (that is, the physical directories and files) in order to answer these questions for myself.
The "Migration Advice" section says:
For these modules or types, setuptools is the best substitute:
distutils.ccompiler
distutils.cmd.Command
distutils.command
distutils.config
distutils.core.Distribution
distutils.errors
...
For these modules or functions, use the standard library module shown:
...
distutils.util.get_platform — use the platform module
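As an aside, here is a sketch of what that last substitution can look like in practice (note that sysconfig.get_platform(), also in the standard library, appears to be the closer drop-in; the platform module exposes the pieces separately):
import platform
import sysconfig

print(sysconfig.get_platform())               # e.g. "linux-x86_64", like distutils.util.get_platform()
print(platform.system(), platform.machine())  # e.g. "Linux x86_64"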
Presumably, that means that setuptools has either a drop-in replacement or something close to it for these modules or types (though I'm not sure how to verify that). So, for instance, perhaps setuptools.command.build_py can replace distutils.command.build_py. Is that correct? In any case, what about these?
distutils.core.setup
distutils.core.Extension
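From what I can tell, setuptools re-exports both at the top level, so a hedged sketch of the replacement (assuming setuptools is installed; the project metadata below is made up) would be:
# setup.py -- setuptools instead of distutils.core
from setuptools import setup, Extension

setup(
    name="example",      # hypothetical metadata
    version="0.1",
    ext_modules=[
        Extension("example._native", sources=["src/native.c"]),
    ],
)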
Furthermore, what am I supposed to make of the fact that setuptools does not appear in the module index of the standard documentation? It is part of the standard distribution, right? I do see it under Lib/site-packages.
UPDATE 1: If setuptools is not currently part of the standard distribution, is it expected to become part of it, say, in Python 3.11 or 3.12? Are customers expected to install it (via pip?) before they can run a setup.py script that imports setuptools? Or is the thought that people shouldn't be running setup.py at all anymore?
Knowing how to replace distutils.core.setup and distutils.core.Extension is probably enough for my current needs, but answers to the other questions I've asked would be quite helpful.
UPDATE 2:
setuptools is indeed part of the python.org standard distribution, as can be verified by importing it in a freshly installed interpreter from python.org. What was confusing me is that it is documented on a separate site, in a separate style, from python.org. However, as SuperStormer pointed out in the comments, some Linux distributions, such as Debian, don't install it by default when you install Python.
UPDATE 3:
This command:
Python-3.9.1/python -m ensurepip --default-pip
installs both pip and setuptools on Debian 10 on a Python installation freshly downloaded and built from python.org. Before the command is executed, pip is absent and setuptools cannot be imported.
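A quick way to verify the result with the same interpreter:
Python-3.9.1/python -m pip --version
Python-3.9.1/python -c "import setuptools; print(setuptools.__version__)"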
How do I set up a Python virtual environment with the FreeCAD library embedded so that it can be imported as a module into scripts?
I would like to avoid using the FreeCAD GUI, as well as depending on a system-wide FreeCAD installation, when working with Python scripts that use FreeCAD libraries to create and modify 3D geometry. I hope a Python virtual environment can make that possible.
I am working in PyCharm 2021.1.1 with Python 3.8 in virtualenv on Debian 10.
I started out with FreeCAD documentation for embedding in scripts as a basis:
https://wiki.freecadweb.org/Embedding_FreeCAD
In one attempt, I downloaded and unpacked .deb packages from Debian, taking care to get the correct version required by each dependency. In another attempt, I copied the contents of a FreeCAD flatpak install, as it should contain all the libraries that FreeCAD depends on.
After placing the libraries to be imported in the virtual environment's folder, I pointed to them with sys.path.append() as well as PyCharm's Project Structure tool in various attempts (a minimal sketch of this follows below). In both cases the virtual environment detects where FreeCAD.so is located but fails to find any of its dependencies, even when they are located in the same folder. When importing those dependencies explicitly, each of them has the same issue. This leads to a dead end when an import fails because the module does not define a module export function, according to Python:
ImportError: dynamic module does not define module export function (PyInit_libnghttp2)
I seem to be looking at a very long chain of broken dependencies, even though I make the required libraries available and inform Python where they are located.
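For reference, the sys.path approach looked roughly like this (the path is hypothetical; it pointed at the folder where FreeCAD.so was placed):
import sys

# Hypothetical folder holding FreeCAD.so and the libraries copied next to it
sys.path.append("/home/user/freecad_libs")

import FreeCAD  # found, but its own dependencies then fail to load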
I would appreciate either straight up instructions for how to do this or pointers to documentation that describes importing FreeCAD libraries in Python virtual environments, as I have not come across anything that specific yet.
I came across a few prior questions which seemed to have similar intent, but no answers:
Embedding FreeCAD in python script
Is it possible to embed Blender/Freecad in a python program?
Similar questions for Conda focus on importing libraries from the host system rather than embedding them in the virtual environment:
Incude FreeCAD in system path for just one conda virtual environment
Other people's questions on FreeCAD forums went unanswered:
https://forum.freecadweb.org/viewtopic.php?t=27929
EDIT:
Figuring this out was a great learning experience. The problem with piecing dependencies together is that, for that approach to work, everything from FreeCAD and its dependencies to the Python interpreter and its dependencies needs to be built against the same versions of the libraries they depend on; otherwise segmentation faults bring everything to a crashing halt. This means that the idea of grabbing FreeCAD modules and the libraries they depend on from a Flatpak installation is in theory not horrible, as all parts are built together using the same library versions. I just couldn't make it work, presumably due to how the included libraries are located and the difficulty of identifying an executable for the included Python interpreter.
In the end, I looked into the contents of the FreeCAD AppImage, and that turned out to have everything needed, in a folder structure that appears to be very friendly to what PyCharm and Python expect from modules and libraries.
This is what I did to get FreeCAD to work with PyCharm and virtualenv:
Download FreeCAD AppImage
https://www.freecadweb.org/downloads.php
Make AppImage executable
chmod -v +x ~/Downloads/FreeCAD_*.AppImage
Create folder for extracting AppImage
mkdir -v ~/Documents/freecad_appimage
Extract AppImage into the folder (note: this expands to close to 30,000 files, requiring in excess of 2 GB of disk space)
cd ~/Documents/freecad_appimage
~/Downloads/FreeCAD_*.AppImage --appimage-extract
Create folder for PyCharm project
mkdir -v ~/Documents/pycharm_freecad_project
Create PyCharm project using the Python interpreter from the extracted AppImage
Location: ~/Documents/pycharm_freecad_project
New environment using: Virtualenv
Location: ~/Documents/pycharm_freecad_project/venv
Base interpreter: ~/Documents/freecad_appimage/squashfs-root/usr/bin/python
Inherit global site-packages: False
Make available to all projects: False
Add folder containing FreeCAD.so library as Content Root to PyCharm Project Structure and mark as Sources (by doing so, you shouldn't have to set PYTHONPATH or sys.path values, as PyCharm provides module location information to the interpreter)
File: Settings: Project: Project Structure: Add Content Root
~/Documents/freecad_appimage/squashfs-root/usr/lib
After this PyCharm is busy indexing files for a while.
Open Python Console in PyCharm and run command to check basic functioning
import FreeCAD
Create python script with example functionality
import FreeCAD
vec = FreeCAD.Base.Vector(0, 0, 0)
print(vec)
Run script
Debug script
All FreeCAD functionality I have used in my scripts so far has worked. However, one kink seems to be that the FreeCAD module needs to be imported before the Path module. Otherwise the Python interpreter exits with code 139 (interrupted by signal 11: SIGSEGV).
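In practice that means ordering the imports like this (Path here is FreeCAD's own Path workbench module, not pathlib):
import FreeCAD  # must be imported first, or the interpreter crashes with SIGSEGV
import Path     # FreeCAD's Path module; safe to import after FreeCAD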
There are a couple of issues with PyCharm: it shows a red squiggly line under the import name, claiming "No module named 'FreeCAD'", even though the script runs perfectly. Also, PyCharm fails to provide code completion in the code view, even though it manages to do so in its Python Console. I am creating new questions to address those issues and will update the info here if I find a solution.
I am building a Python 3.6 AWS Lambda deployment package and am facing an issue with SQLite.
In my code I am using nltk, which has an import sqlite3 in one of its files.
Steps taken till now:
The deployment package has only the Python modules I am using at the root. I get the error:
Unable to import module 'my_program': No module named '_sqlite3'
Added the _sqlite3.so from /home/my_username/anaconda2/envs/py3k/lib/python3.6/lib-dynload/_sqlite3.so to the package root. Then my error changed to:
Unable to import module 'my_program': dynamic module does not define module export function (PyInit__sqlite3)
Added the SQLite precompiled binaries from sqlite.org to the root of my package, but I still get the same error as in step 2.
My setup: Ubuntu 16.04, python3 virtual env
AWS lambda env: python3
How can I fix this problem?
Depending on what you're doing with NLTK, I may have found a solution.
The base nltk module imports a lot of dependencies, many of which are not used by substantial portions of its feature set. In my use case, I'm only using nltk.sent_tokenize, which carries no functional dependency on sqlite3 even though sqlite3 gets imported as a dependency.
I was able to get my code working on AWS Lambda by changing
import nltk
to
import imp
import sys

# Register empty placeholder modules so that the imports inside nltk
# succeed even though the real sqlite C extension is unavailable.
sys.modules["sqlite"] = imp.new_module("sqlite")
sys.modules["sqlite3.dbapi2"] = imp.new_module("sqlite.dbapi2")

import nltk
This dynamically creates empty modules for sqlite and sqlite.dbapi2. When nltk.corpus.reader.panlex_lite tries to import sqlite, it will get our empty module instead of the standard library version. That means the import will succeed, but it also means that when nltk tries to use the sqlite module it will fail.
If you're using any functionality that actually depends on sqlite, I'm afraid I can't help. But if you're trying to use other nltk functionality and just need to get around the lack of sqlite, this technique might work.
This is a bit of a hack, but I've gotten this working by dropping the _sqlite3.so file from Python 3.6 on CentOS 7 directly into the root of the project being deployed with Zappa to AWS. This means that if you can include _sqlite3.so directly in the root of your ZIP, it should work, since it can then be imported by this line in cpython:
https://github.com/python/cpython/blob/3.6/Lib/sqlite3/dbapi2.py#L27
Not pretty, but it works. You can find a copy of _sqlite3.so here:
https://github.com/Miserlou/lambda-packages/files/1425358/_sqlite3.so.zip
Good luck!
This isn't a solution, but I can explain why it happens.
Python 3 has support for sqlite in the standard library (stable to the point that pip knows about it and won't allow installation of pysqlite). However, this library requires the sqlite developer tools (C libs) to be on the machine at runtime. Amazon's Linux AMI, which is what AWS Lambda runs on (naked AMI instances), does not have these installed by default. I'm not sure whether this means that sqlite support isn't installed or just won't work until the libraries are added, though, because I tested things in the wrong order.
Python 2 did not originally ship sqlite support in the standard library (it was added in 2.5); you had to use a third-party lib like pysqlite to get that support. This means that the binaries could be built more easily without depending on the machine state or path variables.
My suggestion, which I see you've already tried, is to just run that function in Python 2.7 if you can (and make your unit testing just that much harder :/).
Because of these limitations (sqlite support being baked into Python 3's standard library), it is more difficult to create a Lambda-friendly deployment package. The only things I can suggest are to either petition AWS to add that support to Lambda or (if you can get away without actually using the sqlite pieces in nltk) to copy Anaconda's approach of shipping blank libraries that have the proper methods and attributes but don't actually do anything.
If you're curious about the latter, check out any of the fake/_sqlite3 files in an anaconda install. The idea is only to avoid import errors.
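To illustrate the idea (an untested sketch of my own, not Anaconda's actual file): a stub saved as _sqlite3.py early on sys.path only needs the handful of names that sqlite3/dbapi2.py touches at import time, something like:
# Hypothetical stub _sqlite3.py -- just enough for "import sqlite3" to succeed
version = "2.6.0"          # read by sqlite3.dbapi2 to build version_info
sqlite_version = "3.8.2"   # read by sqlite3.dbapi2 to build sqlite_version_info

def register_adapter(type_, adapter):
    pass  # called by sqlite3.dbapi2 at import time; safe to ignore

def register_converter(name, converter):
    pass  # likewise called at import time

def connect(*args, **kwargs):
    raise NotImplementedError("sqlite3 is not available in this environment")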
As apathyman describes, there isn't a direct solution to this until Amazon bundles the C libraries required for sqlite3 into the AMIs used to run Python on Lambda.
One workaround though, is using a pure Python implementation of SQLite, such as PyDbLite. This side-steps the problem, as a library like this doesn't require any particular C libraries to be installed, just Python.
Unfortunately, this doesn't help you if you are using a library which in turn uses the sqlite3 module.
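For illustration, a minimal PyDbLite sketch (based on my reading of its documented API; treat the exact calls as an assumption to verify against the PyDbLite docs):
from pydblite import Base

db = Base("demo.pdl")                       # pure-Python, file-backed storage
db.create("name", "age", mode="override")   # define the fields
db.insert(name="alice", age=30)
db.commit()                                 # persist to demo.pdl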
My solution may or may not apply to you (as it depends on Python 3.5), but hopefully it sheds some light on similar issues.
sqlite3 comes with the standard library, but it is not built into the python3.6 that AWS uses, for the reasons explained by apathyman and the other answers.
The quick hack is to include the shared object (.so) in your Lambda package:
find ~ -name _sqlite3.so
In my case:
/home/user/anaconda3/pkgs/python-3.5.2-0/lib/python3.5/lib-dynload/_sqlite3.so
However, that is not totally sufficient. You will get:
ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
Because the _sqlite3.so is built with python3.5, it also requires the python3.5 shared object. You will also need that in your deployment package:
find ~ -name libpython3.5m.so*
In my case:
/home/user/anaconda3/pkgs/python-3.5.2-0/lib/libpython3.5m.so.1.0
This solution will likely not work if you are using a _sqlite3.so built with python3.6, because the libpython3.6 provided by AWS will likely not support it. However, this is just an educated guess. If anyone has done this successfully, please let me know.
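To see up front which shared libraries a given extension will demand at runtime, ldd is handy (shell sketch, using the path from above):
ldd /home/user/anaconda3/pkgs/python-3.5.2-0/lib/python3.5/lib-dynload/_sqlite3.so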
TL;DR
Although it is not a very good approach, it works just fine. Use the pickle module to dump whatever data you need into a separate file on your local system. Copy that file to your AWS project directory, then load the data from the file and use it.
In long
So, here's what I tried. I had this same problem when trying to import stopwords from nltk.corpus, but AWS wouldn't let me import it, and even installing it didn't seem possible on the Amazon AMI because of the same error: _sqlite3 module not found. So, using my local system, I generated a file and dumped the stopwords into it. Here's the code:
# Only for the local system, where nltk and its corpora are available
from nltk.corpus import stopwords
import pickle

# Note: the stopwords fileid is lowercase 'english'
stop = list(stopwords.words('english'))

with open('stopwords.pkl', 'wb') as f:
    pickle.dump(stop, f)
Now, I copied this file to my AWS directory using WinSCP, and then used the data by loading it from the file in my project.
import pickle

# Load the pickled stopwords list
with open('stopwords.pkl', 'rb') as f:
    stop = pickle.load(f)

It works just fine.
Building on AusIV's answer, this version works for me on AWS Lambda with NLTK. I created a dummysqllite file to mock the required references.
import importlib.util
import sys

# Build module objects from a placeholder file and register them under
# the names that nltk (indirectly) tries to import.
spec = importlib.util.spec_from_file_location("_sqlite3", "/dummysqllite.py")
sys.modules["_sqlite3"] = importlib.util.module_from_spec(spec)
sys.modules["sqlite3"] = importlib.util.module_from_spec(spec)
sys.modules["sqlite3.dbapi2"] = importlib.util.module_from_spec(spec)
You need the _sqlite3.so file (as others have pointed out), but the most robust way to get it is to pull it from the (semi-official?) AWS Lambda Docker images available as lambci/lambda. For example, for Python 3.7, here's an easy way to do this:
First, let's grab the _sqlite3.so library file from the Docker image:
mkdir lib
docker run -v $PWD:$PWD lambci/lambda:build-python3.7 bash -c "cp /var/lang/lib/python3.7/lib-dynload/_sqlite3*.so $PWD/lib/"
Next, we'll make a zipped package with our requirements and code:
pip install -r requirements.txt -t output/
pip install . -t output/
(cd output && zip -r ../output.zip .)
Finally, we add the library file to our archive:
cd lib && zip -r ../output.zip _sqlite3*.so
If you want to use AWS SAM build/packaging, instead copy the library file into the top level of the Lambda environment package (i.e., next to your other Python files).
I used a VirtualBox VM with Ubuntu 16.04 to install Caffe for Python 2.6. Since I wanted to use py2exe, I needed to change the Python version to 3.6. When this was done, the Caffe import code stopped working. Here is the error message:
ImportError: libcaffe.so.1.0.0-rc5: cannot open shared object file: No such file or directory
Do I have to rebuild Caffe, or is there something else that needs to be changed?
Yes, you need to rebuild Caffe. The Python 2 and Python 3 libraries are not compatible, and not all of the file names are the same (the software organization differs). This rebuild requires that you place a single version of Python first in all search paths: make, compile, load, ...
When I need to do this, I check with my SysOps: there's always something I forget.
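For reference, a hedged sketch of the rebuild using Caffe's CMake scripts (the python_version option exists there; paths and generator details vary by setup):
cd caffe
mkdir build && cd build
cmake -Dpython_version=3 ..
make -j"$(nproc)"
make pycaffe   # builds the Python bindings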
I wanted to use the code:
import geodict_lib
locations = geodict_lib.find_locations_in_text(text)
But there seems to be no installer for geodict_lib. How do I install this in Anaconda3 with Python 3?
I know this is a year on, but perhaps I can help others who stumble on this. You'll need to place the files in a directory that your installation of Python searches for modules.
First, download the .zip file from GitHub here.
Once you've done that, you can run the following at the command line or terminal:
conda list
The header of this command's output shows the path to the packages installed in your active environment. Unzip the file you downloaded and move the geodict_lib folder to that location. You might want to run which python as well (see here), since you may have a few different installations to check.
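A more direct way to find a directory Python actually searches is to ask the interpreter itself:
python -c "import site; print(site.getsitepackages())"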
Now when you run import geodict_lib in Python, it should work without trouble!