I have created a Python package and would like to distribute it on PyPI ( https://pypi.org/project/catapi.py/ ). My initial v0.1.1 upload worked without issue. I then added a subdirectory to store abstract classes, because there was a lot of code reuse. Upon uploading this to PyPI and installing, I get an error saying the abc module does not exist.
I did some research and found that I must include the subdirectory in the MANIFEST.in file, so I did. Upon uploading and attempting an install again, I get the same error. I downloaded the package directly and extracted the files, and found the abc directory does indeed exist. Next I checked the site-packages version of catapi, only to find it does not have the abc module.
Has anyone encountered this and knows how to fix it? Here's a script that reproduces the issue:
# make a temp dir to hold this in
mkdir catapi
cd catapi
# Prepare python venv
python -m venv env-catapi
source env-catapi/bin/activate
pip install catapi.py==0.3.4
# Download file for comparison
wget https://files.pythonhosted.org/packages/ac/ee/044c1cc53e7c994fe4a7d57362651da8adff54eb34680c66f62a1d4fb57d/catapi.py-0.3.4.tar.gz
tar -xvf catapi.py-0.3.4.tar.gz
diff catapi.py-0.3.4/catapi env-catapi/lib/python3.8/site-packages/catapi
deactivate
cd ../
# Prints out
# Only in catapi: abc
# Only in env-catapi/lib/python3.8/site-packages/catapi: __pycache__
It's necessary to add the sub-directories to the
packages=['package1', 'package2', 'etc']
part of setup.py. In my case, I had to add the abc directory to have it included in the catapi install:
packages=['catapi', 'catapi.abc'],
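Alternatively, setuptools can discover sub-packages for you with find_packages(), which avoids having to update the list by hand every time a new sub-package is added. A minimal sketch (it builds a throwaway tree mirroring the catapi layout; each directory needs an __init__.py to be recognized as a package):

```python
import os
import tempfile

from setuptools import find_packages

# Build a throwaway project tree that mirrors the catapi layout:
# catapi/__init__.py and catapi/abc/__init__.py
root = tempfile.mkdtemp()
for pkg in ("catapi", os.path.join("catapi", "abc")):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

# find_packages() walks the tree and returns every directory that
# contains an __init__.py, including nested sub-packages.
found = sorted(find_packages(where=root))
print(found)  # ['catapi', 'catapi.abc']
```

In setup.py you would then write packages=find_packages() instead of maintaining the list manually.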
I am new to AWS Lambda and I want to run code on Lambda for a machine learning API. The functions I want to run on Lambda are, in summary, one that reads some CSV files to create a pandas DataFrame and search in it, and another that runs some pickled machine learning models in response to requests from a Flask application. To do this, I need to import pandas, joblib and possibly scikit-learn in versions compatible with Amazon Linux. I am using a Windows machine.
In general, I am going with the approach of uploading zip files as Lambda layers. Since Lambda has a pre-built layer with SciPy and NumPy, I will not include them; if I did, I would exceed Lambda's layer size limit anyway.
To be more specific, I have done the following:
Downloaded and extracted Linux-compatible versions of the libraries listed above. For example, from this link I downloaded "pandas-0.25.0-cp35-cp35m-manylinux1_x86_64.whl" and unzipped it to a folder.
The unzipped libraries are in the following directory:
lambda_layers\python\lib\python3.7\site-packages
I zipped them into a file and uploaded it to an S3 bucket to create a layer.
I imported the packages:
import json
import boto3
import pandas as pd
I got the following error from Lambda:
{
"errorMessage": "Unable to import module 'lambda_function': C extension: No module named 'pandas._libs.tslibs.conversion' not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.",
"errorType": "Runtime.ImportModuleError"
}
The folder structure should follow the standard layout. You can also use Docker to create the zipped, Linux-compatible libraries and upload them as an AWS Lambda layer. Below are the tested commands to create the zipped library for an AWS Lambda layer.
Create and navigate to a directory:
$ mkdir aws1
$ cd aws1
Write the commands below into a Dockerfile, then exit with Ctrl+D:
$ cat > Dockerfile
FROM amazonlinux:2017.03
RUN yum -y install git \
python36 \
python36-pip \
zip \
&& yum clean all
RUN python3 -m pip install --upgrade pip \
&& python3 -m pip install boto3
You can provide any name for the image:
$ docker build -t pythn1/lambda .
Run the image:
$ docker run --rm -it -v ${PWD}:/var/task pythn1/lambda:latest bash
Specify the packages you want to zip in requirements.txt, then exit with Ctrl+D:
$ cat > requirements.txt
pandas
sklearn
You can try using the correct folder structure (python/lib/python3.6/site-packages/) here, but I have not tested it yet:
$ pip install -r requirements.txt -t /usr/lib/python3.6/dist-packages/
Navigate to the mounted directory:
$ cd /var/task
Create a zip file:
$ zip -r ./layers.zip /usr/lib/python3.6/dist-packages/
You should be able to see a layers.zip file in the aws1 folder. If you provided the correct folder structure while installing, the steps below are not required. But with the folder structure I used, they are:
Unzip layers.zip.
Exit Docker, or open a new terminal, and navigate to the folder where you unzipped the file. The unzipped files will be under usr/lib/python3.6/dist-packages/.
Copy these files into the correct folder structure:
$ mkdir -p ./python/lib/python3.6/
$ cp -r ./usr/lib/python3.6/dist-packages ./python/lib/python3.6/site-packages
Zip them again:
$ zip -r ./lib_python.zip ./python
Upload the zip file to the layer, and add that layer to your Lambda function. Also, make sure that you select the right running environment while creating the layer.
Following this document - https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path - you should zip the python/lib/python3.7/site-packages folder (containing pandas and your other dependencies) for your Python layers.
Make sure you add the layer to your function and follow the documentation for the right permissions.
I appreciate the answers that were given; I'm just posting my own answer (found after a whole day of looking) here for reference purposes.
I followed this guide and also this guide.
In summary, the steps I took are:
Connect to my Amazon EC2 instance (running Linux) through SSH. I wanted to deploy an application on Beanstalk, so it was already up for me anyway.
Follow the steps in the first guide to install Python 3.7.
Follow the steps in the second guide to install the libraries. One key note is not to install with pip install -t, since that leads to the libraries being installed without their C extensions built.
Zip the directory found in python/lib/python3.7/site-packages/, as mentioned by the other answers here (although I did follow the directory guide in my first attempts).
Get the file from the EC2 instance through FileZilla.
Follow the Lambda layers guide, and it is done.
I'm trying to install python package in editable mode with:
pip3 install -e ./
My setup.py file contains:
data_files=[
(os.path.expanduser("~") + "/.xxx", ["xxx/yyy.data"])
],
After installation the yyy.data file is not copied to .xxx folder.
Is there an option to create data files outside of the package folder when working in editable mode?
The truth is data_files has caveats. See "No single, complete solution for packaging data" in the list of Problems in Python Packaging, the note in the data_files section of the Packaging and Distributing Projects tutorial from the Python Packaging User Guide, pip's bug "All packages that contain non-package data are now likely installed in a broken way since 7.0.0", and wheel's bug "bdist_wheel makes absolute data_files relative to site-packages".
According to the information gathered from the sources above, your data was installed into the site-packages directory, instead of into your home directory as you were expecting.
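If all you need is to ship a data file alongside the code and read it at runtime, a more reliable pattern than data_files is to keep the file inside the package (declared via package_data={"xxx": ["yyy.data"]} in setup.py) and load it with importlib.resources. A minimal sketch, simulating the hypothetical xxx package and yyy.data file from the question:

```python
import os
import sys
import tempfile
import importlib.resources

# Simulate an installed package "xxx" that carries "yyy.data" inside it
# (hypothetical names taken from the question).
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "xxx")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "yyy.data"), "w") as f:
    f.write("hello")

sys.path.insert(0, root)

# Read the data file relative to the package rather than from ~/.xxx,
# so the lookup works the same in editable and regular installs.
data = importlib.resources.read_text("xxx", "yyy.data")
print(data)  # hello
```

Unlike data_files, package_data keeps the file next to the code, so an editable install reads it straight from your source tree.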
I have a recipe to compile a printer driver and have a few simple lines to run in do_install.
do_install() {
install -d ${D}${libdir}/cups/filter
install -m 755 ${B}/src/rastertoprinter ${D}${libdir}/cups/filter/
install -d ${D}${sysconfdir}/cups/ppd
install -m 755 ${B}/../rastertoprinter/printer_name.ppd ${D}${sysconfdir}/cups/ppd/
}
To compile the source I have a DEPENDS on cups, and also an RDEPENDS on cups, as the OS needs cups installed to print, of course.
The printer driver is not publicly available, so I've renamed it to rastertoprinter and changed my path names.
Essentially I need to create (or ensure the existence of) the directory /usr/lib/cups/filter and copy the rastertoprinter program there, and likewise create the directory /etc/cups/ppd and copy the .ppd file into it.
The first two lines run fine but the 3rd throws the following errors:
file /etc/cups conflicts between attempted installs of printername-r0.corei7_64 and cups-2.2.2-r0.corei7_64
file /etc/cups/ppd conflicts between attempted installs of printername-r0.corei7_64 and cups-2.2.2-r0.corei7_64
I don't understand why both recipes can't create this directory and put files in it. Strangely, the first directory, /usr/lib/cups/filter, works fine.
It turns out the issue is that each file packaged in Yocto also generates a %dir entry for each parent directory of that file. You don't want to own a directory that is owned by another package, so add this to your recipe:
DIRFILES = "1"
It will cause your package to not own the parent directories of the files you package.
This generates an RPM spec file without the %dir entries.
I'm trying to use wxHaskell to modify wxAsteroids. Every time I change something within the wxAsteroids-1.1.0.1.tar.gz folder, it does not affect the program. Is there a way to rebuild wxAsteroids.exe so that it uses my code? Unzipping the .tar.gz gives me this error when trying to run Asteroids.lhs:
could not find module 'Paths_wxAsteroids'
Use -v to see a list of the files searched for.
tar xzf wxAsteroids*
This will extract the package into a wxAsteroids-<version> directory. Change to that directory and...
vi wxAsteroids.cabal
-- Edit the file, add Paths_wxAsteroids to the other-modules: list.
Now the package should be fixed, so reinstall this fixed version:
cabal install
Yay!
I'm trying to install python-unicodecsv on Python 3 for Odoo, but it says "unable to locate package python-unicodecsv".
You can download it directly from the Git repository below:
https://github.com/jdunck/python-unicodecsv
Installation steps:
Step 1. Download and extract the .zip file
Download it directly as a .zip file from the repository and extract it in one of the following ways.
Extract into the same directory:
unzip python-unicodecsv-master.zip
Extract into another directory:
unzip python-unicodecsv-master.zip -d <directory path>
Step 2. Install it from the terminal in Ubuntu
Go to the extracted directory in the terminal and run the command below:
sudo python setup.py install
Your library is now installed, and you can access everything related to python-unicodecsv and use it in your Python files.
I hope this is helpful for you :)
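As a side note: on Python 3 the standard-library csv module already reads and writes Unicode natively (unicodecsv mainly exists to backport that behaviour to Python 2), so depending on what your Odoo code does you may not need the extra package at all. A quick sketch using only the standard library:

```python
import csv
import io

# Write a row containing non-ASCII text, then read it back -
# no extra package needed on Python 3.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["naïve", "café", "日本語"])

buf.seek(0)
rows = list(csv.reader(buf))
print(rows)  # [['naïve', 'café', '日本語']]
```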