I want to create a bash script to install a new virtual environment "ABC" in conda. But before I go ahead and run a command to create this env, I want to check if conda is already installed on the machine. If not installed, I want to install miniconda and then create the env "ABC". If conda is already installed then I would just go ahead and create the environment. (All this should happen within the same script)
I just want to know if it is possible to check the existence of conda within a bash script and then proceed with the rest of the installations?
'''
#!/bin/bash
# <code_to_check_existence_of_conda_env_here ?>
# If it does not exist, I will run the code below
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
conda env update -f my_env.yml
'''
The main issue here is that Conda has multiple components. Which components are loaded depends on how Conda is installed and which user the BASH script is executing under. I'll try describing the components, and hopefully you can decide what is suitable to verify.
Conda Components
1: Conda Package
The first is a Python package conda, which is installed in the Anaconda/Miniconda/Miniforge base environment. If the base environment is activated, one could test
python -m conda
which will give something like
/path/to/python: No module named conda
if it isn't there. Otherwise, it outputs the conda entrypoint's documentation.
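For a script, one could key off the exit status of that command; a minimal sketch, assuming the base environment is already activated:
'''
# Check whether the conda Python package is importable in the current environment
if python -m conda --version >/dev/null 2>&1; then
    echo "conda package found"
else
    echo "conda package not found"
fi
'''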
2: Conda Entrypoint
The entrypoint conda, which acts as a CLI, is located under the condabin directory of the Anaconda/Miniconda/Miniforge installation. When a user runs conda init, a managed section is added to their shell initialization file (.bashrc for Linux BASH) that includes code to add condabin to PATH. This is most likely what OP wants to identify; however, running with the shebang /bin/bash will not load the .bashrc file. Instead, one should probably be using
#!/bin/bash -l
or
#!/usr/bin/env bash -l
Then the entrypoint can be located with
which conda
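Within a script, that check could look like this minimal sketch (my_env.yml is the environment file from the question):
'''
#!/bin/bash -l
# -l makes bash load the login initialization files so the PATH change from conda init applies
if command -v conda >/dev/null 2>&1; then
    conda env update -f my_env.yml
else
    echo "conda not found on PATH" >&2
    exit 1
fi
'''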
3: Conda Activate
Finally, Conda also includes some shell-only functions, which are defined in the aforementioned shell initialization code. This sets up a middleman shell function, also called conda (essentially an alias), which can be viewed with
type conda
This function determines whether a conda activate or conda deactivate command is being requested (these are pure shell functions) or something that needs to be forwarded to the entrypoint.
Recommendation
Were I designing this, I would write an interactive script that checks for #2 (which conda) and, if that comes up blank, prompts the user to either provide the PATH to the Conda entrypoint (maybe they installed it in a weird place or didn't run conda init) or install Miniforge.1
I also would not use the base environment to install stuff - that is a bad idea for an end-user, let alone a third-party - and instead create a new environment. I would prompt the user with a specific default environment name, but also provide an option for them to customize.
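A rough sketch of that flow, assuming the standard Miniforge installer URL and an unattended -b install are acceptable (the default environment name "ABC" comes from the question):
'''
#!/bin/bash -l
# Sketch only: check for the conda entrypoint; otherwise ask the user what to do.
if ! command -v conda >/dev/null 2>&1; then
    read -r -p "conda not found. Path to conda entrypoint (leave empty to install Miniforge): " conda_path
    if [ -n "$conda_path" ]; then
        export PATH="$(dirname "$conda_path"):$PATH"
    else
        wget -O Miniforge3.sh "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
        bash Miniforge3.sh -b -p "$HOME/miniforge3"
        export PATH="$HOME/miniforge3/condabin:$PATH"
    fi
fi

# Prompt for an environment name, with a default, then create it.
read -r -p "Environment name [ABC]: " env_name
conda create -y -n "${env_name:-ABC}"
'''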
[1] Yes, Miniforge, not Miniconda. Commercial use of the Anaconda defaults channels now requires a paid license, so better to use the free Miniforge.
I'm not really good at bash scripting but I would execute a command:
pip3 freeze | grep conda
and, based on the exit status (error or not), either install it or directly create the environment.
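A sketch of that idea, keying off grep's exit status (note this only detects the conda Python package in whatever environment pip3 points at, not the conda CLI itself):
'''
if pip3 freeze | grep -q conda; then
    conda env update -f my_env.yml
else
    echo "conda not found; install Miniconda here"
fi
'''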
Run the command conda list on your machine; the output will of course be different when it's installed vs. not installed. Then run an if statement on the output.
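For example, using the exit status rather than parsing the text (a minimal sketch):
'''
if conda list >/dev/null 2>&1; then
    echo "conda is installed"
else
    echo "conda is not installed"
fi
'''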
In bash, if we want to check whether a piece of software is installed, the following check is used (note this only detects software installed through the system package manager):
if dpkg -l "$SOFTWARE"; then
    # <Do stuff>
fi
ensureconda
This sounds exactly like the problem that the experimental tool ensureconda was designed to solve. It does, however, require a Python installation with pip:
pip install ensureconda
Here are the command options:
$ ensureconda --help
Usage: ensureconda [OPTIONS]
Ensures that a conda/mamba is installed.
Options:
--mamba / --no-mamba search for mamba
--micromamba / --no-micromamba search for micromamba, install if not
present
--conda / --no-conda search for conda
--conda-exe / --no-conda-exe search for conda.exe / conda-standalone,
install if not present
--no-install don't install conda/mamba if no version can
be discovered
--min-conda-version VERSIONNUMBER
minimum version of conda to accept (defaults
to 4.8.2)
--min-mamba-version VERSIONNUMBER
minimum version of mamba/micromamba to
accept (defaults to 0.7.3)
--help Show this message and exit.
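In a script, one might capture the path it reports (assuming, as I understand it, that ensureconda prints the path of the discovered or installed executable to stdout):
'''
CONDA_EXE="$(ensureconda --conda --mamba)"   # path to a usable conda/mamba
"$CONDA_EXE" env update -f my_env.yml
'''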
Type conda activate
or conda activate env_name
The first command will activate the base environment if conda is already installed. The second command will activate the user-created environment if it exists.
I'm using Python 3 on Linux
When I open a new terminal, the name of virtual environment is not shown.
I can install python packages in this (unknown) environment with pip install
What is the name of this current virtual environment?
How can I get the repository which this virtual environment uses to install packages (via pip)?
When I open a new terminal, the name of virtual environment is not shown.
The name cannot be shown, because in order to see the name you must first activate the virtual environment. You have to
create a virtual env in a folder
cd into that folder
run source bin/activate
Then, the name will be shown.
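A minimal sketch of those steps, using the standard venv module (the environment name myenv is just an example):
'''
python3 -m venv myenv
cd myenv
source bin/activate   # the prompt now shows "(myenv)"
'''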
Without an active virtual environment, pip will install the packages into
/usr/lib/python3 when run with admin rights (e.g. using sudo)
$HOME/.local/lib/python3 when run without admin rights
And the repository the packages will be downloaded from should normally be pypi.org.
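Two quick ways to confirm this on your machine (exact paths vary by distribution; pip config requires a reasonably recent pip):
'''
python3 -m site --user-site   # where pip installs without sudo, under ~/.local
pip3 config list              # shows any configured index URL; the default index is https://pypi.org/simple
'''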
I know the obvious answer is to use virtualenv and virtualenvwrapper, but for various reasons I can't/don't want to do that.
So how do I modify the command
pip install package_name
to make pip install the package somewhere other than the default site-packages?
The --target switch is the thing you're looking for:
pip install --target=d:\somewhere\other\than\the\default package_name
But you still need to add d:\somewhere\other\than\the\default to PYTHONPATH to actually use them from that location.
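On Linux/macOS the same idea looks like this (the /opt/mypkgs path is just an example):
'''
pip install --target=/opt/mypkgs package_name
export PYTHONPATH=/opt/mypkgs:$PYTHONPATH
python -c "import package_name"   # now importable from the custom location
'''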
-t, --target <dir>
Install packages into <dir>. By default this will not replace existing files/folders in <dir>.
Use --upgrade to replace existing packages in <dir> with new versions.
Upgrade pip if target switch is not available:
On Linux or OS X:
pip install -U pip
On Windows (this works around an issue):
python -m pip install -U pip
Use:
pip install --install-option="--prefix=$PREFIX_PATH" package_name
You might also want to use --ignore-installed to force all dependencies to be reinstalled using this new prefix. You can use --install-option multiple times to add any of the options you can use with python setup.py install (--prefix is probably what you want, but there are a bunch more options you could use).
Instead of the --target or --install-options options, I have found that setting the PYTHONUSERBASE environment variable works well (from discussion on a bug regarding this very thing):
PYTHONUSERBASE=/path/to/install/to pip install --user
(Or set the PYTHONUSERBASE directory in your environment before running the command, using export PYTHONUSERBASE=/path/to/install/to)
This uses the very useful --user option but tells it to make the bin, lib, share and other directories you'd expect under a custom prefix rather than $HOME/.local.
Then you can add this to your PATH, PYTHONPATH and other variables as you would a normal installation directory.
Note that you may also need to specify the --upgrade and --ignore-installed options if any packages upon which this depends require newer versions to be installed in the PYTHONUSERBASE directory, to override the system-provided versions.
A full example
PYTHONUSERBASE=/opt/mysterypackage-1.0/python-deps pip install --user --upgrade numpy scipy
...to install the most recent versions of the scipy and numpy packages into a directory which you can then include in your PYTHONPATH like so (using bash and Python 2.6 on CentOS 6 for this example):
export PYTHONPATH=/opt/mysterypackage-1.0/python-deps/lib64/python2.6/site-packages:$PYTHONPATH
export PATH=/opt/mysterypackage-1.0/python-deps/bin:$PATH
Using virtualenv is still a better and neater solution!
To pip install a library exactly where I wanted it, I navigated in the terminal to the location where I wanted the directory, then used
pip install mylibraryName -t .
the logic of which I took from this page: https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/download
Installing a Python package often only includes some pure Python files. If the package includes data, scripts and or executables, these are installed in different directories from the pure Python files.
Assuming your package has no data/scripts/executables, and that you want your Python files to go into /python/packages/package_name (and not some subdirectory a few levels below /python/packages as when using --prefix), you can use the one time command:
pip install --install-option="--install-purelib=/python/packages" package_name
If you want all (or most) of your packages to go there, you can edit your ~/.pip/pip.conf to include:
[install]
install-option=--install-purelib=/python/packages
That way you don't have to specify it again and again (and can't forget to).
Any executables/data/scripts included in the package will still go to their default places unless you specify additional install options (--prefix/--install-data/--install-scripts, etc.; for details, look at the custom installation options).
Tested these options with python3.5 and pip 9.0.3:
pip install --target /myfolder [packages]
Installs ALL packages including dependencies under /myfolder. Does not take into account that dependent packages are already installed elsewhere in Python. You will find packages from /myfolder/[package_name]. In case you have multiple Python versions, this doesn't take that into account (no Python version in package folder name).
pip install --prefix /myfolder [packages]
Checks if dependencies are already installed. Will install packages into /myfolder/lib/python3.5/site-packages/[packages]
pip install --root /myfolder [packages]
Checks dependencies like --prefix but install location will be /myfolder/usr/local/lib/python3.5/site-packages/[package_name].
pip install --user [packages]
Will install packages into $HOME:
/home/[USER]/.local/lib/python3.5/site-packages
Python searches automatically from this .local path so you don't need to put it to your PYTHONPATH.
=> In most cases, --user is the best option to use.
If the home folder can't be used for some reason, then use --prefix.
pip3 install "package_name" -t "target_dir"
source - https://pip.pypa.io/en/stable/reference/pip_install/
-t switch = target
Nobody seems to have mentioned the -t option, but it is the easiest:
pip install -t <target directory> <package>
pip install packageName -t pathOfDirectory
or
pip install packageName --target pathOfDirectory
Just to add one point to Ian Bicking's answer:
Using the --user option to specify the install directory also works if one wants to install a Python package into one's home directory (without sudo rights) on a remote server.
E.g.,
pip install --user python-memcached
The command will install the package into one of the directories listed in your PYTHONPATH.
Newer versions of pip (8 or later) can directly use the --prefix option:
pip install --prefix=$PREFIX_PATH package_name
where $PREFIX_PATH is the installation prefix where lib, bin and other top-level folders are placed.
To add to the already good advice: I had an issue installing IPython when I didn't have write permissions to /usr/local.
pip uses distutils to do its install and this thread discusses how that can cause a problem as it relies on the sys.prefix setting.
My issue happened when the IPython install tried to write to '/usr/local/share/man/man1' with Permission denied. As the install failed it didn't seem to write the IPython files in the bin directory.
Using "--user" worked and the files were written to ~/.local. Adding ~/.local/bin to the $PATH meant I could use "ipython" from there.
However I'm trying to install this for a number of users and had been given write permission to the /usr/local/lib/python2.7 directory. I created a "bin" directory under there and set directives for distutils:
vim ~/.pydistutils.cfg
[install]
install-data=/usr/local/lib/python2.7
install-scripts=/usr/local/lib/python2.7/bin
then (-I is used to force the install despite previous failures/.local install):
pip install -I ipython
Then I added /usr/local/lib/python2.7/bin to $PATH.
I thought I'd include this in case anyone else has similar issues on a machine they don't have sudo access to.
If you are using brew with Python, unfortunately, pip/pip3 ships with very limited options. You do not have the --install-option, --target, or --user options mentioned above.
Note on pip install --user
The normal pip install --user is disabled for brewed Python. This is because of a bug in distutils, because Homebrew writes a distutils.cfg which sets the package prefix.
A possible workaround (which puts executable scripts in ~/Library/Python/./bin) is:
python -m pip install --user --install-option="--prefix=" <package-name>
You might find this line very cumbersome. I suggest use pyenv for management.
If you are using
brew upgrade python python3
Ironically, you are actually downgrading pip functionality.
(I posted this answer simply because pip on my macOS does not have the --target option, and I spent hours working around it.)
With pip v1.5.6 on Python v2.7.3 (GNU/Linux), the --root option allows you to specify a global installation prefix, (apparently) irrespective of a specific package's options. For instance:
$ pip install --root=/alternative/prefix/path package_name
I suggest following the documentation and creating a ~/.pip/pip.conf file. Note that the documentation's example is missing the headers directory, which leads to the following error:
error: install-base or install-platbase supplied, but installation scheme is incomplete
The full working content of conf file is:
[install]
install-base=$HOME
install-purelib=python/lib
install-platlib=python/lib.$PLAT
install-scripts=python/scripts
install-headers=python/include
install-data=python/data
Unfortunately I can install, but when I try to uninstall, pip tells me there is no such package to uninstall... so something is still wrong, but the package does go to its predefined location.
pip install /path/to/package/
is now possible.
The difference between this and using the -e or --editable flag is that -e links to where the package is saved (i.e. your downloads folder), rather than installing it into your Python path.
This means if you delete/move the package to another folder, you won't be able to use it.
Use the --system option; it will install pip package bins to /usr/local/bin, which is accessible to all users. Installing without this option may not work for all users, since things go to a user-specific directory like $HOME/.local/bin; that install then has to be repeated for every user, and if PATH is not set for those users the bins won't work. So if you are installing for all users, you need sudo access:
sudo su -
python3 -m pip install --system <module>
logout
log back in
which <module-bin> --> it should be installed in /usr/local/bin/
Sometimes it only works with the cache argument:
python -m pip install -U pip --target=C:\xxx\python\lib\site-packages Pillow --cache-dir C:\tmp
Say I have a virtualenv environment myenv1 and another, myenv2. I have some packages installed in myenv1, and I would like to install them on myenv2 as well. Is there a way to install them in myenv2 using myenv1?
I referred to the following link, but apparently that requires a requirement.txt to be present.
However, I have installed many packages on the go, and I do not have a prepared requirement.txt for myenv1. Also, the second step requires downloading the required packages again to a folder. Is there a way to achieve this without downloading anything again?
I noticed that
$ pip3 install virtualenv
installed the script into ~/Documents/bitbucket-python-scripts/.env35/bin, though the virtual environment .env35 (created earlier) was not active at the time of issuing the command.
From ~/.profile
PATH="/home/gsmith/Documents/bitbucket-python-scripts/.env35/bin:$PATH"
export WORKON_HOME=~/Documents/bitbucket-python-scripts/.env35
What is the correct place (dir) where virtualenv is installed? I would think that the virtualenv script should not depend on whether I have created virtual environments or not. Is this correct?
Please also clarify the following. Each virtual environment created by virtualenv has bin and site-packages subdirs; ~/.local also does. I understand that when I use
(.env35) $ pip3 install aiohttp
a package is installed into the active virtual environment.
When is ~/.local used? Should I set up something additional?
Using:
Python 3.5.3;
Debian GNU/Linux 9.8 (stretch)
UPDATED 04/29: Through experimenting, I found that if no virtual environment is found in the PATH variable, pip3 installs the virtualenv script into ~/.local/bin.
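A few quick checks that may help confirm where a package and its console script actually landed (illustrative only):
'''
which virtualenv              # which bin directory wins on PATH
pip3 show -f virtualenv       # lists installed files, relative to the package's location
python3 -m site --user-base   # the --user install base, normally ~/.local
'''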