After installing any conda package, the login name disappears from the Linux terminal prompt. For reference, please see the screenshot.
Looks like you installed conda 22.9.0. Changes were made to the shell interface requiring a one-time re-initialization of conda:
conda init bash
After running the above command, you will need to restart your shell for the changes to take effect.
See: https://github.com/conda/conda/issues/11885
I want to create a bash script to install a new virtual environment "ABC" in conda. But before I go ahead and run a command to create this env, I want to check if conda is already installed on the machine. If not installed, I want to install miniconda and then create the env "ABC". If conda is already installed then I would just go ahead and create the environment. (All this should happen within the same script)
I just want to know: is it possible to check for the existence of conda within a bash script and then proceed with the rest of the installation?
'''
#!/bin/bash
<code_to_check_existence_of_conda_env_here ?>
# If it does not exist, I will run the below code
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
conda env update -f my_env.yml
'''
The main issue here is that Conda has multiple components. Which components are loaded depends on how Conda is installed and what user the BASH script is executing under. I'll try describing the components and hopefully you can decide what is suitable to verify.
Conda Components
1: Conda Package
The first is a Python package conda, which is installed in the Anaconda/Miniconda/Miniforge base environment. If the base environment is activated, one could test
python -m conda
which will give something like
/path/to/python: No module named conda
if it isn't there. Otherwise, it outputs the conda entrypoint's documentation.
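As a rough sketch of using that check in a script (my own addition; it assumes the python on PATH is the one from the base environment):
# Check the exit status of the module invocation to detect the conda package.
if python -m conda --version > /dev/null 2>&1; then
    echo "the conda Python package is importable"
else
    echo "the conda Python package was not found"
fi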
2: Conda Entrypoint
The entrypoint conda, which acts as a CLI, is located under the condabin directory of the Anaconda/Miniconda/Miniforge installation. When a user runs conda init, a managed section is added to their shell initialization file (.bashrc for BASH on Linux) that includes code to add condabin to PATH. This is most likely what the OP wants to identify; however, running with the shebang /bin/bash will not load the .bashrc file. Instead, one should probably be using
#!/bin/bash -l
or
#!/usr/bin/env bash -l
Then the entrypoint can be located with
which conda
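For example, a minimal sketch following the answer's suggestion (it assumes the login-shell initialization files end up sourcing the conda init block):
#!/bin/bash -l
# Check whether a conda entrypoint is on PATH after shell initialization.
conda_path="$(which conda 2> /dev/null)"
if [ -n "$conda_path" ]; then
    echo "Found conda at: $conda_path"
else
    echo "conda not found on PATH"
fi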
3: Conda Activate
Finally, Conda also includes some shell-only functions, which are defined in the aforementioned shell initialization code. This sets up a middleman shell function, also called conda (essentially an alias), which can be viewed with
type conda
This function determines whether a conda (de)?activate command is being requested, which is handled by pure shell functions, or whether the call needs to be forwarded to the entrypoint.
Recommendation
Were I designing this, I would write an interactive script that checks for #2 (which conda) and, if that comes up blank, prompts the user to either provide the PATH to the Conda entrypoint (maybe they installed it in a weird place or didn't run conda init) or install Miniforge.[1]
I also would not use the base environment to install stuff - that is a bad idea for an end-user, let alone a third party - and would instead create a new environment. I would prompt the user with a specific default environment name, but also provide an option for them to customize it; a rough sketch of this follows after the footnote.
[1] Yes, Miniforge, not Miniconda. Commercial use of the Anaconda defaults channels now requires a paid license, so better to use the free Miniforge.
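As a non-authoritative sketch of that recommendation (the Miniforge installer URL, the default environment name my_env, and the prompt wording are my own assumptions, not part of the original answer):
#!/bin/bash -l
# Sketch: locate the conda entrypoint, offer to install Miniforge if it is
# missing, then create a dedicated (non-base) environment.
conda_exe="$(which conda 2> /dev/null)"

if [ -z "$conda_exe" ]; then
    read -r -p "conda not found. Enter the path to your conda entrypoint, or press Enter to install Miniforge: " user_path
    if [ -n "$user_path" ]; then
        conda_exe="$user_path"
    else
        # Assumed installer URL for Linux x86_64; adjust for your platform.
        wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh -O /tmp/miniforge.sh
        bash /tmp/miniforge.sh -b -p "$HOME/miniforge3"
        conda_exe="$HOME/miniforge3/condabin/conda"
    fi
fi

read -r -p "Environment name [my_env]: " env_name
env_name="${env_name:-my_env}"

# Create the new environment instead of installing into base.
"$conda_exe" create -y -n "$env_name"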
I'm not really good at bash scripting but I would execute a command:
pip3 freeze | grep conda
and, based on the output (whether the exit status indicates an error or not), either install it or directly create the environment.
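For example, a sketch of that idea (note it only detects a pip-visible conda package, not necessarily a working installation; the environment name ABC is taken from the question):
if pip3 freeze | grep -q conda; then
    # exit status 0: a conda package was listed, so create the environment
    conda create -y -n ABC
else
    # non-zero exit status: conda was not listed, install it first
    echo "conda is not installed"
fi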
Run the command conda list on your machine. The output will be different when it is installed vs. not installed; then run an if statement on the result.
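For instance, a sketch that branches on the command's exit status rather than parsing its output (the environment name ABC is taken from the question):
if conda list > /dev/null 2>&1; then
    conda create -y -n ABC          # conda responded, create the environment
else
    echo "conda is not installed"   # command failed or was not found
fi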
In bash, if we want to check whether a piece of software is installed, a check like the following is used:
if dpkg -l "$SOFTWARE" &> /dev/null; then
    # <Do stuff>
fi
ensureconda
This sounds like exactly the problem that the experimental tool ensureconda was designed to solve. It does, however, require a Python installation with pip:
pip install ensureconda
Here are the command options:
$ ensureconda --help
Usage: ensureconda [OPTIONS]

  Ensures that a conda/mamba is installed.

Options:
  --mamba / --no-mamba            search for mamba
  --micromamba / --no-micromamba  search for micromamba, install if not
                                  present
  --conda / --no-conda            search for conda
  --conda-exe / --no-conda-exe    search for conda.exe / conda-standalone,
                                  install if not present
  --no-install                    don't install conda/mamba if no version can
                                  be discovered
  --min-conda-version VERSIONNUMBER
                                  minimum version of conda to accept (defaults
                                  to 4.8.2)
  --min-mamba-version VERSIONNUMBER
                                  minimum version of mamba/micromamba to
                                  accept (defaults to 0.7.3)
  --help                          Show this message and exit.
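In a script it might be used like this (a sketch; it assumes ensureconda prints the path of the executable it found or installed to stdout):
# Capture the path reported by ensureconda, then reuse the OP's env file.
conda_exe="$(ensureconda --no-mamba --no-micromamba)"
if [ -n "$conda_exe" ]; then
    "$conda_exe" env update -f my_env.yml
fi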
Type conda activate
or conda activate env_name
The first command will activate the base environment if conda is already installed. The second command will activate the user-created environment if it is installed.
I have a virtualenv folder on google drive that I'd like to work on with computer1 and computer2.
However, I'm finding that the virtualenv is effectively set up for computer1 only. When I cd to the folder with my virtualenv on computer2, activate the virtualenv, and then run the command
python --version
I get the error:
No Python at 'C:\Users\computer1_user\AppData\Local\Programs\Python\Python39\python.exe'
Which is a folder on computer1, not computer2.
Additionally, when going through some of the files in venv/Scripts, such as the activate.txt file, I see lines such as:
VIRTUAL_ENV="C:\Users\computer1_user\programming\data_science\python\file\venv"
export VIRTUAL_ENV
Which is a folder on computer1.
Basically, how do I set up a virtualenv that can be accessed, changed, and saved from multiple machines? It seems the setup I have now has too many connections to computer1.
If you share a project with others, use a build system, or plan to copy the project to any other location where you need to restore the environment, you need to specify the external packages that the project requires. The recommended approach is a requirements.txt file, which lists the packages (and versions) that pip should install. The most common way to create it is
pip freeze > requirements.txt
which records the environment's current package list into requirements.txt.
For more information, see pip freeze.
If you want to install the dependencies in a virtual environment, create and activate that environment first, then use the Install from requirements.txt command. For more information on creating a virtual environment, see Use virtual environments.
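Outside of an IDE, the equivalent command-line workflow looks roughly like this (paths and names are illustrative):
# On computer1: record the installed packages
pip freeze > requirements.txt

# On computer2: create a fresh virtualenv and restore the packages
python -m venv venv
source venv/bin/activate        # on Windows: venv\Scripts\activate
pip install -r requirements.txt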
I noticed that
$ pip3 install virtualenv
installed the script into ~/Documents/bitbucket-python-scripts/.env35/bin, though the virtual environment .env35 (created earlier) was not active at the time of issuing the command.
From ~/.profile
PATH="/home/gsmith/Documents/bitbucket-python-scripts/.env35/bin:$PATH"
export WORKON_HOME=~/Documents/bitbucket-python-scripts/.env35
What is the correct place (directory) where virtualenv should be installed? I would think that the virtualenv script should not depend on whether I have created virtual environments or not. Is this correct?
Please also clarify the following. Each virtual environment created by virtualenv has bin and site-packages subdirectories. ~/.local also does. I understand that when I use
(.env35) $ pip3 install aiohttp
a package is installed into the active virtual environment. When is ~/.local used? Should I set up something additional?
Using:
Python 3.5.3;
Debian GNU/Linux 9.8 (stretch)
UPDATED 04/29
Through experimentation I found that if no virtual environment's bin directory is present in the PATH variable, pip3 installs the virtualenv script into ~/.local/bin.
I have a conda environment at home that I'm using for my Ph.D., but now that I need more computational power I have to transfer (or install a perfect copy of) my environment to one of the University's computers. The computers have no internet connection; all I have is SSH.
My attempt to copy the entire /anaconda3 directory and .bashrc to a path similar to the one I use at home (/home/henrique/bin) has not worked.
What is the correct way to transfer my anaconda install?
Conda-pack is a command line tool that archives a conda environment, which includes all the binaries of the packages installed in the environment. This is useful when you want to reproduce an environment with limited or no internet access. All the previous methods download packages from their respective repositories to create an environment. Keep in mind that conda-pack is both platform and operating system specific and that the target computer must have the same platform and OS as the source computer.
To install conda-pack, make sure you are in the root or base environment so that it is available in sub-environments. Conda-pack is available at conda-forge or PyPI.
conda-forge:
conda install -c conda-forge conda-pack
PyPI:
pip install conda-pack
To package an environment:
# Pack environment my_env into my_env.tar.gz
$ conda pack -n my_env
# Pack environment my_env into out_name.tar.gz
$ conda pack -n my_env -o out_name.tar.gz
# Pack environment located at an explicit path into my_env.tar.gz
$ conda pack -p /explicit/path/to/my_env
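Since the question mentions that the target machine is reachable only over SSH, the packed archive can be copied across with scp before unpacking (the hostname and paths here are placeholders):
# Copy the packed environment to the remote machine over SSH
scp my_env.tar.gz user@university-host:/home/user/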
To install the environment:
# Unpack environment into directory `my_env`
$ mkdir -p my_env
$ tar -xzf my_env.tar.gz -C my_env
# Use Python without activating or fixing the prefixes. Most Python
# libraries will work fine, but things that require prefix cleanups
# will fail.
$ ./my_env/bin/python
# Activate the environment. This adds `my_env/bin` to your path
$ source my_env/bin/activate
# Run Python from in the environment
(my_env) $ python
# Cleanup prefixes from in the active environment.
# Note that this command can also be run without activating the environment
# as long as some version of Python is already installed on the machine.
(my_env) $ conda-unpack
Source