Conda environment includes packages from a different environment - python-3.x

Hi everyone, I have a strange problem with packages installed via pip in conda environments. I want to set up two conda environments, env1 and env2, using yml files such as the following (only the packages installed via pip vary in env2):
name: env1
channels:
- defaults
- anaconda-fusion
dependencies:
- cudatoolkit=9.0=1
- cudnn=7.6.0=cuda9.0_0
- pip=19.1.1
- python=3.6.5
- shapely==1.6.4
- pip:
  - argparse==1.4.0
  - BeautifulSoup4==4.7.1
  - cython==0.28.5
  - h5py==2.9.0
  - imageio==2.5.0
  - imgaug==0.2.9
  - imutils==0.5.2
...
If I install env1, everything works fine. Activating env1 and typing pip list -v shows that all packages are located in C:\ProgramData\Anaconda3\envs\env1\Lib\site-packages, as they should be.

Now I deactivate env1 and install env2, and that is where the problem occurs. All packages listed in the yml of env2 should be installed separately for env2 and located in C:\ProgramData\Anaconda3\envs\env2\Lib\site-packages. Instead, all packages that are also in env1 are considered already installed, and only the remainder is installed into env2. Activating env2 and typing pip list -v shows all the packages I need, but most of them are located in C:\ProgramData\Anaconda3\envs\env1\Lib\site-packages and only the 'new' ones in C:\ProgramData\Anaconda3\envs\env2\Lib\site-packages.

It seems like env2 has some kind of visibility of, or access to, env1. This is highly unwanted and causes particularly severe problems if env1 and env2 contain the same packages in different versions. Reinstalling Anaconda did not help, and I could not find any suspicious entry in the environment variables either. Does anyone have helpful thoughts or ideas? It would be appreciated a lot.
Edit: I would like to add that the entire procedure works fine on several other systems. What could cause the error on this one specific system?
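Edit 2: a minimal diagnostic sketch (not a fix, and the "env1" substring check is my own assumption about the path layout) that can be run inside each activated environment to see where packages would actually be imported from:

```python
# Print where the *active* interpreter lives and where it imports packages from.
# In a healthy env2, all of these should sit under ...\envs\env2.
import sys
import sysconfig

print("interpreter  :", sys.executable)
print("prefix       :", sys.prefix)
print("site-packages:", sysconfig.get_paths()["purelib"])

# If env1's site-packages shows up on sys.path, something (often a stray
# PYTHONPATH entry or a pip config with a custom target) is injecting it.
leaked = [p for p in sys.path if "env1" in p]
print("suspicious entries:", leaked)
```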

Related

How to install conda packages on aarch64 from an x86-64 architecture

How can I list the installed packages of a particular environment on an x86-64 Linux machine, and how can I create a new conda environment on an aarch64 machine using the same packages?
First, on the x86-64 Linux machine (called L2), I export the package list:
conda list --export > envconda.txt
When I open envconda.txt, it contains:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
_libgcc_mutex=0.1=main
_r-mutex=1.0.0=anacondar_1
I changed platform: linux-64 to linux-aarch64, because I am going to install the packages on the aarch64 architecture:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-aarch64
_libgcc_mutex=0.1=main
_r-mutex=1.0.0=anacondar_1
On the aarch64 Linux machine (called L1), I create a conda environment:
conda create -n envtest --file envconda.txt
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- setuptools==36.4.0=py36_1
- kiwisolver==1.1.0=pypi_0
- pyyaml==3.13=pypi_0
- jedi==0.10.2=py36_2
- libgcc==5.2.0=0
- jsonschema==2.6.0=py36_0
- ptyprocess==0.5.2=py36_0
- prompt_toolkit==1.0.15=py36_0
- libstdcxx-ng==9.1.0=hdf63c60_0
- tqdm==4.36.1=pypi_0
- tomli==1.2.3=pypi_0
- astor==0.7.1=pypi_0
- argparse==1.4.0=pypi_0
- pycparser==2.19=pypi_0
- testpath==0.3.1=py36_0
- cudnn==7.6.5=cuda10.2_0
- asn1crypto==0.22.0=py36_0
- dataclasses==0.8=pypi_0
- platformdirs==2.4.0=pypi_0
- krbcontext==0.10=pypi_07
- decorator==4.1.2=py36_0
- lazy-object-proxy==1.7.1=pypi_0
- gsl==2.2.1=0
- pexpect==4.2.1=py36_0
- icu==54.1=0
- freetype==2.5.5=2
- bleach==1.5.0=py36_0
- matplotlib==3.1.1=pypi_0
- wheel==0.29.0=py36_0
- cudatoolkit==10.2.89=hfd86e86_1
- glib==2.50.2=1
- kneed==0.7.0=pypi_0
- sqlite==3.13.0=0
- importlib-metadata==1.7.0=pypi_0
- python==3.6.2=0
- jpeg==9b=0
- pango==1.40.3=1
- fontconfig==2.12.1=3
- resampy==0.2.2=pypi_0
- nbformat==4.4.0=py36_0
- pixman==0.34.0=0
- scikit-learn==0.21.3=pypi_0
- termcolor==1.1.0=pypi_0
- typed-ast==1.5.4=pypi_0
- keras-applications==1.0.8=pypi_0
- harfbuzz==0.9.39=2
- libffi==3.2.1=1
- jupyter_client==5.1.0=py36_0
- gssapi==1.6.9=pypi_0
- curl==7.54.1=0
- keras==2.2.4=pypi_0
- isort==5.10.1=pypi_0
- simplegeneric==0.8.1=py36_1
- joblib==0.14.0=pypi_0
- pypandoc==1.6.3=pypi_0
- python-dateutil==2.8.2=pypi_0
- ipython_genutils==0.2.0=py36_0
- pyparsing==2.4.2=pypi_0
- ca-certificates==2022.6.15=ha878542_0
- krb5==1.13.2=0
- path.py==10.3.1=py36_0
- markdown==3.0.1=pypi_0
- requests-kerberos==0.12.0=pypi_0
- hdfs==2.5.8=pypi_0
- traitlets==4.3.2=py36_0
- tornado==4.5.2=py36_0
- librosa==0.7.0=pypi_0
- pyasn1==0.4.8=pypi_0
- blas==1.0=mkl
- zlib==1.2.11=0
- libogg==1.3.2=h14c3975_1001
- mkl==2017.0.3=0
- terminado==0.6=py36_0
- libflac==1.3.1=hf484d3e_1002
- python-levenshtein==0.12.2=pypi_0
- werkzeug==0.14.1=pypi_0
- pyspark==2.3.2=pypi_0
- urllib3==1.26.9=pypi_0
- bzip2==1.0.6=3
- html5lib==0.9999999=py36_0
- pywavelets==1.1.1=pypi_0
- zeromq==4.1.5=0
- pykerberos==1.2.1=pypi_0
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-aarch64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-aarch64
- https://repo.anaconda.com/pkgs/r/noarch
- https://conda.anaconda.org/conda-forge/linux-aarch64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
How can I install the packages successfully on the aarch64 architecture?
Last but not least, when I install a package using pip install numpy, I get the error Illegal instruction (core dumped).
How can I solve this on linux-aarch64 as well?
It is very unlikely this will work, for multiple reasons:
Package support off the major platforms (osx-64, linux-64, win-64) is sparse, especially further back in time. A concrete example is cudatoolkit, which only has linux-aarch64 builds starting with version 11.
Overgeneralized environment. The more packages included in an environment, the more difficult it becomes to solve it, and solving across platforms aggravates this problem. I would, for example, remove any Jupyter-related packages completely. In the future, try to plan ahead to have dedicated environments to specific projects, and only install the packages that are absolutely required.
Some packages are completely incompatible. For example, mkl is architecture-specific.
Nevertheless, if you want to attempt recreating an approximation of the environment, there are some options. First, one cannot achieve this with conda list --export - that simply does not handle environments that have packages from PyPI installed.
PyPI-centric Approach
Because so much of the environment is from PyPI, my first inclination is to recommend abandoning the Conda components and going the pip route. That is, use
pip list --format=freeze > requirements.txt
to capture the Python packages, then create a new environment with something like:
environment.yaml
name: foo
channels:
- conda-forge
- nodefaults
dependencies:
- python=3.6
- pip
- pip:
  - -r requirements.txt
With both requirements.txt and environment.yaml in the same folder, the environment is created with
## "foo" is arbitrary - pick something descriptive
conda env create -n foo -f environment.yaml
Retaining some Conda packages
You could also try keeping some parts from Conda by mixing together a conda env export and the previous pip list. Specifically, export a minimal environment definition, with
conda env export --from-history > environment.yaml
Edit this file to include a specific version of Python, remove any packages that are not available for linux-aarch64 (like mkl), and add the pip: section, as above:
environment.yaml
#...
dependencies:
- python=3.6
# ...
- pip
- pip:
  - -r requirements.txt
This is then used with:
conda env create -n foo -f environment.yaml
Expect to iterate several times to discover what cannot be found for the platform. I would strongly recommend using mamba instead of Conda in order to minimize this solving time.
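As a rough aid for that iteration (my own sketch, not part of the answer's workflow), one can mechanically rewrite the `conda list --export` file into looser `name=version` specs, dropping known-incompatible packages such as mkl, so the solver has room to pick linux-aarch64 builds:

```python
# Loosen `name=version=build` export lines to `name=version`, and drop
# packages known to be unavailable on the target platform.
def loosen(lines, drop=()):
    specs = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments like "# platform: linux-64"
        name, version, *_build = line.split("=")
        if name in drop:
            continue
        specs.append(f"{name}={version}")
    return specs

exported = [
    "# platform: linux-64",
    "_libgcc_mutex=0.1=main",
    "mkl=2017.0.3=0",
    "python=3.6.2=0",
]
print(loosen(exported, drop={"mkl"}))  # ['_libgcc_mutex=0.1', 'python=3.6.2']
```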

conda list vs pip list differences in conda created environment

I am using conda version 4.5.11, python 3.6.6, and Windows 10.
I create a virtual environment using conda
conda create --name venv
When I check for installed packages
conda list
it is (as expected), empty.
But
pip list
is quite long.
Question #1: Why? - when I create a virtual environment using
python -m venv venv
the pip list is empty.
When I am not in an activated virtual environment, then
conda list
is also quite long, but it isn't the same as the pip list (* see follow up below)
In general, the pip list is a subset of the conda list. There is at least one exception ('tables' is in the pip list but not in the conda list), but I haven't analysed it too closely. conda list changes/displays some (all?) hyphens as underscores (or pip does the reverse). There are also a few instances of the versions being different.
Question #2: Why? (and follow up questions - can they be? and should I care?)
I was hoping to have a baseline conda 'environment' (that may not be the right word), i.e. the packages I have installed/updated in Anaconda/conda, and then all virtual environments would be pulled from that. If I needed to install something new, it would first be installed into the baseline. Only when I need to create an application using different versions of packages than the baseline (which I don't envision in the foreseeable future) would I need to update the virtual environments differently.
Question #3: Am I overthinking this? I am looking for consistency and hoping for understanding.
-- Thanks.
Craig
Follow Up #1: After installing some packages into my empty conda venv, the results of conda list and pip list are still different. The pip list is much shorter than it was, and is now a subset of the conda list (it does not include two packages I don't use, so I don't care).
Follow Up #2: In the empty environment, I ran some code
python my-app.py
and was only mildly surprised that it ran without errors. As expected, when I installed a package (pytest), it failed to run due to the missing dependencies. So ... empty is not empty.
1. conda list vs pip list
If all you did was create the environment (conda create -n venv), then nothing is installed in there, including pip. Nevertheless, the shell is still going to try to resolve pip using the PATH environment variable, and is possibly finding the pip in the Anaconda/Miniconda base environment.
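To see what that resolution looks like, here is a tiny illustration (the directories below are made up for the example; on Windows, `where pip` performs the equivalent lookup) of how a shell walks PATH to find `pip`:

```python
import os

def resolve_all(cmd, path_value):
    """Return every match for `cmd` along a PATH-style string; the first hit wins."""
    hits = []
    for d in path_value.split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            hits.append(candidate)
    return hits

# An env's bin dir earlier in PATH shadows the base install. These directories
# are fictitious, so on most machines this prints an empty list.
fake_path = os.pathsep.join(["/tmp/example-env/bin", "/tmp/example-base/bin"])
print(resolve_all("pip", fake_path))
```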
2. pip list is a subset of conda list outside the env
This could simply be a matter of conda installing things other than Python packages, which pip does not have the option to install. Conda is a more generic package manager and brings in all the dependencies (e.g., shared libraries) necessary to run each package - by definition, a broader range than what is available from PyPI.
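As for the hyphen/underscore differences noted in the question: pip normalizes project names per PEP 503 (runs of `-`, `_`, and `.` compare equal, case-insensitively), while conda displays the package name as recorded. A quick sketch of that normalization:

```python
import re

def pep503_normalize(name):
    """PEP 503 name normalization: '-', '_' and '.' runs are equivalent; lowercase."""
    return re.sub(r"[-_.]+", "-", name).lower()

# 'prompt_toolkit' (as conda shows it) and 'prompt-toolkit' (as pip shows it)
# refer to the same project:
assert pep503_normalize("prompt_toolkit") == pep503_normalize("prompt-toolkit")
print(pep503_normalize("Prompt_Toolkit"))  # prompt-toolkit
```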
3. Overthinking
I think this is more of a workflow style question, and generally outside the scope of StackOverflow because it's going to get opinionated answers. Try searching around for best practice recommendations and pick a style suited to your goals.
Personally, I would never try to install everything into my base/root Conda environment simply because the more one installs, the more one has dependency requirements pulling in different directions. In the end, Conda will centralize all packages anyway (anaconda/pkgs or miniconda3/pkgs), so I focus on making modular environments that serve specific purposes.

Installing graph-tool library on conda installed python 3.5, Mac OS

Precisely the same problem as here, which hasn't been resolved. I followed the sequential directions here; all channels were added.
Tried:
Adding export PKG_CONFIG_PATH=$PKG_CONFIG_PATH://anaconda/pkgs to .bash_profile, pointing at the directory where cairomm is installed.
./configure -with--CAIROMM_CFLAGS -with--CAIROMM_LIBS
Can someone kindly check whether I have at least implemented the alternative solutions correctly?
And of course I've tried the simplest conda install graph-tool after adding channels such as ostrokach-forge and the like.
Instead of success, I get the following:
PackagesNotFoundError: The following packages are not available from current channels:
Whoa, such an unpopular post! Answering for my own and other novices' use in the future (after all, conda-installed Python is easier):
A slightly different case, but the same in that a certain library is not reached. According to this query:
conda needs to be able to find all the dependencies at once.
The -c flag only adds that channel for that one command.
conda install -c vgauthier rwest graph-tool. But an easier way is to add those channels to your configuration
conda config --add channels vgauthier --add channels rwest
And then perform
conda install graph-tool
But when I used the command conda install -c http://conda.anaconda.org/vgauthier graph-tool, it worked instantly.
Before that, nothing worked (when I used only the channel name vgauthier, or ostrokach-forge, etc.).
If I type vi .condarc I see
channels:
- ostrokach-forge
- conda-forge
- defaults
And because I remain quite ignorant about using brew-installed packages with conda-installed Python, I started out by following the directions to install all the necessary dependencies using brew (including pixman).
I wonder how the command found everything, though; plus Python was upgraded from 3.5 to 3.6.
This is how I end up solving OS issues; I'm not 100% clear on how I made the computer connect the dots.
Still, I am nonetheless left with figuring out the install with ./configure. I want to understand the error message that was returned and how to address it.

Export / import conda environment and package including local files

I want to make my analysis reproducible and want to use conda to make sure that specific software of a specific version is used. To do so, I set up an environment including some programs built from local source, plus scripts exporting some environment variables.
I exported the environment and built a package from the local files (basically following a procedure described here, post #2: >link<):
conda env export > myenv.yml
conda package --pkg-name myenv --pkg-version 0.1 --pkg-build 1
On a different machine, I imported the environment without problems using
conda env create -f myenv.yml
source activate myenv
However, I got some trouble when trying to install the package:
conda install myenv-0.1-1.tar.bz2
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package '<unknown>::myenv-0.1-1'.
FileNotFoundError(2, 'No such file or directory')
Attempting to roll back.
FileNotFoundError(2, 'No such file or directory')
So I read a bit about channels and tried setting up a local channel with the package:
mkdir -p own_pkg/linux-64
mkdir -p own_pkg/noarch
mv myenv-0.1-1.tar.bz2 own_pkg/linux-64/
conda index own_pkg/linux-64 own_pkg/noarch
updating: myenv-0.1-1.tar.bz2
I added the following to ~/.condarc
channels:
- defaults
- file://my/path/to/own_pkg
And then tried again to install but still:
conda install myenv
Fetching package metadata .............
PackageNotFoundError: Packages missing in current channels:
- myenv
We have searched for the packages in the following channels:
- https://repo.continuum.io/pkgs/main/linux-64
- https://repo.continuum.io/pkgs/main/noarch
- https://repo.continuum.io/pkgs/free/linux-64
- https://repo.continuum.io/pkgs/free/noarch
- https://repo.continuum.io/pkgs/r/linux-64
- https://repo.continuum.io/pkgs/r/noarch
- https://repo.continuum.io/pkgs/pro/linux-64
- https://repo.continuum.io/pkgs/pro/noarch
- file://my/path/to/own_pkg/linux-64
- file://my/path/to/own_pkg/noarch
Even though the files .index.json, repodata.json, etc. exist in /my/path/to/own_pkg/linux-64, and the package is named there and its tar.bz2 file is referenced.
Can someone explain to me what I am doing wrong and/or what the appropriate workflow is to achieve my goal?
Thanks!
More information:
Source machine:
Linux Ubuntu 16.04
conda version 4.4.7
conda-build version 3.2.1
Target machine:
Scientific Linux 7.4
conda version 4.3.29
conda-build version 3.0.27
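Update: for reference, a minimal sketch of the channel layout I am aiming for (the conda commands are commented out because they depend on a local install; two details I am double-checking as possible culprits are that `conda index` may need the channel root rather than each subdirectory on newer conda-build versions, and that file:// URLs need three slashes before an absolute path, i.e. file:///home/...):

```shell
# Build the minimal skeleton conda expects: a channel root containing
# per-platform subdirectories.
mkdir -p own_pkg/linux-64 own_pkg/noarch
# mv myenv-0.1-1.tar.bz2 own_pkg/linux-64/
# conda index own_pkg                              # channel ROOT (newer conda-build)
# conda install -c file:///absolute/path/to/own_pkg myenv
ls own_pkg
```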

pyvenv-3.4 pip freeze showing multiple packages in new environment

I am using pyvenv-3.4 on Ubuntu 12.04 and just created my first virtual environment.
After activating it, I used pip freeze to check that no packages had been installed, and found the following list of packages:
Brlapi==0.5.6
GnuPGInterface==0.3.2
Mako==0.5.0
MarkupSafe==0.15
PAM==0.4.2
PIL==1.1.7
Twisted-Core==11.1.0
Twisted-Names==11.1.0
Twisted-Web==11.1.0
adium-theme-ubuntu==0.3.2
apt-xapian-index==0.44
apturl==0.5.1ubuntu3
argparse==1.2.1
chardet==2.0.1
command-not-found==0.2.44
configglue==1.0
debtagshw==0.1
defer==1.0.2
dirspec==3.0.0
duplicity==0.6.18
httplib2==0.7.2
jockey==0.9.7
keyring==0.9.2
language-selector==0.1
launchpadlib==1.9.12
lazr.restfulclient==0.12.0
lazr.uri==1.0.3
louis==2.3.0
nvidia-common==0.0.0
oauth==1.0.1
onboard==0.97.1
oneconf==0.2.8.1
pexpect==2.3
piston-mini-client==0.7.2
protobuf==2.4.1
pyOpenSSL==0.12
pycrypto==2.4.1
pycups==1.9.61
pycurl==7.19.0
pyinotify==0.9.2
pyserial==2.5
pysmbc==1.0.13
python-apt==0.8.3ubuntu7.2
python-dateutil==1.5
python-debian==0.1.21ubuntu1
python-virtkey==0.60.0
pyxdg==0.19
reportlab==2.5
rhythmbox-ubuntuone==4.2.0
screen-resolution-extra==0.0.0
sessioninstaller==0.0.0
simplejson==2.3.2
software-center-aptd-plugins==0.0.0
stevedore==0.15
system-service==0.1.6
ubuntuone-couch==0.3.0
ubuntuone-installer==3.0.2
ubuntuone-storage-protocol==3.0.2
ufw==0.31.1-1
unattended-upgrades==0.1
unity-lens-video==0.3.5
unity-scope-video-remote==0.3.5
usb-creator==0.2.23
vboxapi==1.0
virtualenv==1.11.4
virtualenv-clone==0.2.4
virtualenvwrapper==4.2
wadllib==1.3.0
wsgiref==0.1.2
xdiagnose==2.5.3
xkit==0.0.0
zope.interface==3.6.1
As this is a newly activated environment, why would I see the list of packages already installed in Ubuntu?
Apologies if I am missing something obvious, but I expected this to be empty.
Any insight would be appreciated!
If you're using the latest version of virtualenv, --no-site-packages isn't necessary anymore. I highly recommend not relying on python modules from aptitude :).
You can also do pip freeze --local > requirements.txt. This will output only the packages installed into your virtual env, without listing all the dependencies (the packages themselves handle those).
UPDATE
pyvenv is problematic; that's why you are getting extra packages in requirements.txt.
You can remove it, install the latest version of virtualenv, and ask virtualenv to create the env for you with this command:
sudo virtualenv --no-site-packages -p /usr/bin/python3.4 <envname>
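As a side note, for stdlib venv-style environments you can check whether system site-packages are supposed to be visible by reading the env's pyvenv.cfg. A sketch (the sample file contents below are typical values, not taken from the poster's machine):

```python
# Parse the `include-system-site-packages` flag out of a pyvenv.cfg.
sample_cfg = """home = /usr/bin
include-system-site-packages = false
version = 3.4.0
"""

def system_site_enabled(cfg_text):
    for line in cfg_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "include-system-site-packages":
            return value.strip().lower() == "true"
    return False

print(system_site_enabled(sample_cfg))  # False: system packages should not leak in
```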
