pip compiling vs binaries - linux

Sometimes pip install launches a lengthy compilation process, and sometimes it does not. This is most noticeable with numpy, which takes significant time to compile but installs almost instantly from a binary. I have an Ubuntu 14 machine where pip always compiles numpy, and an Ubuntu 16 machine where it never compiles.
I assumed the Ubuntu 14 binary packages were no longer available or something similar. But then I launched a brand new VM with that same older OS, ran pip install numpy, and it went super fast (no compiling). So clearly it is not simply the OS version. What is going on here?

It's probably a difference in the version of pip you're using. Both binary wheels and source distributions exist on PyPI, so the question is which one pip chooses. On the same machine, I found that pip 1.5.4 would pick the source distribution while pip 9.0.1 would pick the binary wheel.
Newer versions of pip (e.g. 9.0.1) have options for controlling this behaviour: https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-no-binary
Older versions of pip predate the manylinux wheel format, so they cannot use the prebuilt Linux binaries at all (manylinux support arrived in pip 8.1).
So upgrade pip, and it should install without compiling.
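The check-and-fix workflow, sketched as commands (numpy here stands in for any package with prebuilt wheels):

```shell
# Check which pip you have; manylinux wheel support arrived in pip 8.1
pip --version

# Upgrade pip, then install numpy: a recent pip picks the prebuilt
# manylinux wheel, so no compiler is invoked
pip install --upgrade pip
pip install numpy

# To deliberately force a source build instead (the old default), you would run:
#   pip install --no-binary :all: --force-reinstall numpy
```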

is there any way to install cupy without Cuda

pip install cupy won't give me a working cupy without CUDA.
Linux: lubuntu v21.10
python v3.10
conda v22.11.1
I have Linux installed on an older laptop that doesn't have any dedicated GPU.
I have a python framework I'm trying to test out, and it requires cupy to run. I tried pip install cupy, but cupy won't run without a CUDA installation. I also tried conda install cupy, but the framework still won't run with the CUDA that conda installs.
https://pypi.org/project/cupy/ is "NumPy & SciPy for GPU" (emphasis mine — phd). You cannot use it without CUDA.
If you want to use pyVHR without a GPU, you need to switch to the pyVHR_CPU branch. There is a pyVHR_CPU_env.yml for creating a CPU-only conda environment. See the installation instructions.
Another approach would be to install an emulator (qemu, VirtualBox, etc.) and configure it to emulate a GPU, though I'm not sure it's worth the trouble in terms of speed.
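If the framework's code is under your own control, a common pattern (sketched here; it is not specific to pyVHR) is to fall back to NumPy when CuPy or a GPU is unavailable:

```python
# Fall back to NumPy on machines without CUDA; `xp` is the conventional
# alias for "whichever array module is available".
try:
    import cupy as xp                  # needs a CUDA toolkit and driver
    xp.cuda.runtime.getDeviceCount()   # raises if no GPU is present
except Exception:
    import numpy as xp                 # CPU-only fallback

a = xp.arange(5)
print(int(a.sum()))  # -> 10 on either backend
```

NumPy and CuPy deliberately share most of their API, which is what makes this swap workable for simple array code.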

how does conda check packages for compatibility?

In my environments created with anaconda, the same packages installed with conda are not compatible when I try to install with pip.
Is there a difference how pip and conda handle dependencies?
Here an example of requirements.txt
# Python version 3.9.13
django==2.2.5
djangorestframework==3.14.0
gensim==4.1.2
joblib==1.1.1
nltk==3.7
numpy==1.21.5
openpyxl==3.0.9
pandas==1.4.4
pickleshare==0.7.5
scikit-learn==1.1.3
seaborn==0.12.0
spacy==3.3.1
tensorflow==2.9.1
unidecode==1.2.0
conda allows you to create the environment; pip reports an incompatibility between django and djangorestframework.
Conda checks whether all packages that will end up in the environment are compatible with each other and tries to find the optimal solution, considering all package versions.
Pip is less strict: it only checks whether the new package is compatible with the ones already installed, and it doesn't change the versions of previously installed packages.
Pip installs packages from pypi.org, while conda installs from anaconda.org. The packages are not exactly the same, since the Anaconda team builds its own packages and tries to increase their compatibility with older ones.
However, sometimes you are not interested in 100% compatibility and just want the latest features. Then pip is good enough, because your unit tests will tell you if something goes wrong.
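The conflict pip reports here is concrete: djangorestframework 3.14.0 declares a dependency on django>=3.0 in its metadata, while requirements.txt pins django==2.2.5. A minimal sketch of the satisfiability check a resolver performs (real tools parse full PEP 440 specifiers, not this bare comparison):

```python
# Simplified sketch of a resolver's version check; real resolvers
# handle full PEP 440 specifiers (pre-releases, epochs, ranges, ...).
def satisfies(version: str, minimum: str) -> bool:
    """True if dotted `version` is at least `minimum`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# requirements.txt pins django==2.2.5, but djangorestframework 3.14.0
# requires django>=3.0 -- both pins cannot be satisfied at once:
print(satisfies("2.2.5", "3.0"))   # -> False
```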

PyPI + setuptools: Developing in `python3.10`; Running on `python3.8`

I enjoy developing in python3.10 with its newer features like match/case and modern type hints.
It is my understanding that more people will have python3.8 installed on their system, since it is the default python3 version (the default for sudo apt install python3).
Is there any way to allow users with python3.8 to run my PyPI packages? Is it generally good practice to omit these newer features so that your package can reach a larger audience?
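The thread has no answer, but the trade-off splits cleanly: new syntax like match/case cannot run on 3.8 at all (the file fails to parse), while new-style type hints can, via the annotations future import (PEP 563). A sketch of the latter, with a hypothetical function:

```python
# Runs on Python 3.8+: the future import makes all annotations lazy
# strings, so `str | None` (3.10 runtime syntax) is never evaluated.
from __future__ import annotations

def greet(name: str | None = None) -> str:  # hypothetical example function
    return f"Hello, {name or 'world'}!"

print(greet())  # -> Hello, world!
```

Whichever you choose, declare it with python_requires=">=3.8" (or ">=3.10") in your packaging metadata, so pip refuses to install the package on interpreters you don't support.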

why is the pipenv install of spyder causing a complete python tree install?

Just installing spyder3 in a pipenv
reports it finds the spyder requirement (and 2 others) already satisfied. However, pipenv continues with the installation of ~600 MB of python packages in the virtual environment.
To work with spyder3 and the latest ipywidgets on ubuntu disco, I set up a
pipenv --three --site-packages.
I am working on scipy simulation code and want to have simple widgets for some needed config functionality, so I thought: "why not use widgets in a jupyter notebook?"
The ipywidgets distro package release (6.0.*) is quite behind the latest version (7.5.*), so some things were uglier or unimplemented.
So before going further, I want to set up spyder within the created pipenv. It is already installed on the system, so not much should happen:
pipenv install spyder
But this results in a longer waiting period and the install of 83 packages,
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 83/83 — 00:00:50
All that just to get what I already had:
python 3.7.3
spyder 3.3.2
I'd like to understand why the install is getting so big, since this is the kind of behavior I'd rather avoid (i.e. by not using anaconda).

Not able to install scipy, matplotlib and scikit-learn using pip 1.5.6

Trying to install
pip install numpy
pip install scipy
pip install matplotlib
pip install scikit-learn
It failed with scipy, matplotlib and scikit-learn.
(from https://pypi.python.org/simple/scipy/) because it is not compatible with this Python
Skipping
My Python version is 3.4 and my pip version is 1.5.6.
Please help me install the above packages.
With pip 1.5.6, pip will try to compile those projects from source, which requires a lot of system dependencies (for scipy in particular, you need gfortran and an optimized BLAS/LAPACK implementation).
I assume you are using the system-provided version of pip under Linux. I would recommend using the latest version of pip (8.1 or later) in a virtualenv, to avoid replacing files of the system-installed pip. Then you should be able to install manylinux wheels, which do not require the compilation step.
Alternatively, you can install miniconda and install those packages with the conda command line instead of pip.
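The virtualenv route, sketched as commands (the environment name sci-env is illustrative):

```shell
# Create an isolated environment so the system pip is left untouched
python3 -m venv sci-env        # directory name is illustrative
. sci-env/bin/activate

# Upgrade pip inside the venv; manylinux wheel support needs pip >= 8.1
pip install --upgrade pip

# These now install from prebuilt wheels -- no gfortran or BLAS required
pip install numpy scipy matplotlib scikit-learn
```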
Forget pip, which is flawed beyond repair (static linking etc.).
Download IPython with the Anaconda Suite: https://www.continuum.io/downloads
It brings most of the modules needed for scientific computing (it is a tedious task if you have to download stuff to site-packages and run python setup.py install 3781 times...).
I wrote several programs using matplotlib, scipy, numpy etc. with it.
Moreover, it sports a package manager (comparable to Synaptic on Ubuntu) if you are too lazy for the above-mentioned task (and you are...).
Greets, Dr. Cobra
