For some reason the Jupyter notebooks on my VM are in the wrong environment (i.e., stuck in (base)). Furthermore, I can change the environment in the terminal but not in the notebook. Here is what happens when I attempt !conda activate desired_env in the notebook:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
# conda environments:
#
base * /anaconda
azureml_py36 /anaconda/envs/azureml_py36
azureml_py38 /anaconda/envs/azureml_py38
azureml_py38_pytorch /anaconda/envs/azureml_py38_pytorch
azureml_py38_tensorflow /anaconda/envs/azureml_py38_tensorflow
I tried the answers here (e.g., first running !source /anaconda/etc/profile.d/conda.sh).
I also tried activating the environment using source rather than conda activate: !source /anaconda/envs/desired_env/bin/activate. This runs but doesn't actually change anything when I check the current environment with conda env list.
Edit: also adding that if I install a package in the (base) environment in the terminal, I still don't have access to it in the Jupyter notebook.
I'm the PM who released AzureML Notebooks. You can't activate a conda env from a cell; you have to create a new kernel from the conda env. Here are the instructions: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-terminal#add-new-kernels
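In short, from the VM's terminal you register the environment as a Jupyter kernel. A minimal sketch, assuming the environment is named desired_env as in the question:

conda activate desired_env
pip install ipykernel                     # kernel machinery must be installed in this env
python -m ipykernel install --user --name desired_env --display-name "Python (desired_env)"

After that, the new kernel should appear in the notebook's kernel picker, and packages installed into desired_env become available in notebooks that use it.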
I have python 3.7.10, conda 4.12.0, and Spyder 5.0.5 installed on Windows 10, and I can see the list of environments with conda env list.
However, when I run conda activate <environment>, CommandNotFoundError appears:
In[1]: conda activate <env>
Note: you may need to restart the kernel to use updated packages.
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If using 'conda activate' from a batch script, change your
invocation to 'CALL conda.bat activate'.
[...]
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I'd like to change the environment without running Anaconda Navigator (or closing Spyder every time I change the environment).
Is there a command that I can run in the Spyder console to change the virtual environment? Is it also possible to run different environments in different Spyder consoles?
To change environments, you will need to change the interpreter preference and restart the console or create a new one. To change the interpreter preference, go to Tools > Preferences > Python interpreter > Use the following Python interpreter and select the path to the python.exe of the env you want to use.
For more info on how to use existing environments with Spyder, see: https://docs.spyder-ide.org/current/faq.html#using-existing-environment
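As a rough sketch (the FAQ above has the details): the target environment also needs the spyder-kernels package before Spyder can use its interpreter. Here <env> is a placeholder for your environment name:

conda activate <env>
conda install spyder-kernels   # required for Spyder to start a console in this env

Then point Tools > Preferences > Python interpreter at that environment's python.exe and restart the console.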
Detailed info here; a tl;dr can be found towards the end...
I've got a bioinformatics workflow I run using snakemake, with a python3 wrapper script using the snakemake API so that the snakemake command is simplified (https://github.com/charlesfoster/covid-illumina-snakemake). Most necessary programs are installed in a 'master' conda environment, while other programs with incompatible dependencies are installed in dedicated environments using conda directives within snakemake rules.
However, some programs cannot be easily included in this manner because they have a more complex installation. An example is pangolin (https://github.com/cov-lineages/pangolin), which requires the pangolin repo to be cloned, a conda environment created, and then a "pip install ." (a rough sketch of these steps is shown after the rule below). Then, to run pangolin within the workflow, I have the following rule:
rule pangolin:
    input:
        fasta=os.path.join(RESULT_DIR, "{sample}/variants/{sample}.consensus.fa"),
    output:
        report=os.path.join(RESULT_DIR, "{sample}/pangolin/{sample}.lineage_report.csv"),
    shell:
        """
        set +eu
        eval "$(conda shell.bash hook)" && conda activate pangolin && pangolin --outfile {output.report} {input.fasta} &> /dev/null
        set -eu
        """
I've also tried the new named conda environment directive as of snakemake version ~6.15.5:
rule pangolin:
    input:
        fasta=os.path.join(RESULT_DIR, "{sample}/variants/{sample}.consensus.fa"),
    output:
        report=os.path.join(RESULT_DIR, "{sample}/pangolin/{sample}.lineage_report.csv"),
    conda:
        "pangolin"
    shell:
        """
        pangolin --outfile {output.report} {input.fasta} &> /dev/null
        """
Steps to run the workflow:
conda activate CIS
CIS [options] directory_name/
While this works on my main development PC, when I try to install the pipeline on a new computer, I end up getting the following error:
Could not find conda environment: pangolin
You can list all discoverable environments with `conda info --envs`.
If I run conda info --envs manually within the terminal, I get the following:
$USER/Programs/covid-illumina-snakemake/.snakemake/conda/520fff074cd181af7ee385f2520fdd81
$USER/Programs/covid-illumina-snakemake/.snakemake/conda/cb6755e5de757f643e542e3ec52055b7
base $USER/miniconda3
CIS * $USER/miniconda3/envs/CIS
pangolin $USER/miniconda3/envs/pangolin
If I run conda info --envs within the snakemake workflow itself, I get the following:
$USER/Programs/covid-illumina-snakemake/.snakemake/conda/520fff074cd181af7ee385f2520fdd81
$USER/Programs/covid-illumina-snakemake/.snakemake/conda/cb6755e5de757f643e542e3ec52055b7
$USER/miniconda3
base * $USER/miniconda3/envs/CIS
$USER/miniconda3/envs/pangolin
(the username has been replaced with $USER in both listings for brevity)
So, as you can see, the names of the environments are no longer detected within the snakemake workflow, and the 'CIS' environment is incorrectly thought to be 'base'. Therefore, the pangolin conda environment cannot be activated by name with eval "$(conda shell.bash hook)" && conda activate pangolin.
tl;dr: conda info --envs has unexpected and different behaviour when invoked from within a snakemake workflow, which is 'driven' by a python script within a 'master' conda env.
Does anyone know why this might be, and/or how to fix it? Is there a better way to activate a named conda environment within a snakemake workflow?
Thanks!
snakemake version: 6.15.5
conda version: 4.11.0
This question already has answers here:
Activate conda environment using subprocess
(1 answer)
Conda command working in command prompt but not in bash script
(4 answers)
Closed 1 year ago.
I want to run the following bash script from inside a Python script.
conda activate bne
CUDA_VISIBLE_DEVICES=1 python bne.py --model models/BNE_SGsc --fe ./embeddings/BioEmb/Emb_SGsc.txt --fi names.txt --fo output_BNE_SGsc.txt
My python script/function to do this is as follows -
def do_tensorflow_routine(path_name_file):
    os.chdir(os.path.join("../", "bne_resources/"))
    os.system("conda activate bne")
    bne_command = "python bne.py --model models/BNE_SGsc --fe ./embeddings/BioEmb/Emb_SGsc.txt --fi names.txt --fo names_bne_SGsc.txt"
    subprocess.run(bne_command, shell=True)
    pass
However, I am getting the following error -
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I am using a cluster and my system specifications are as follows -
Static hostname: lab1
Icon name: computer-server
Chassis: server
Machine ID: 104265a0ea5b48c1a3c5a9802294af66
Boot ID: 4277780744ae448292d66a9ff39c76e2
Operating System: Ubuntu 20.04.1 LTS
Kernel: Linux 5.8.0-36-generic
Architecture: x86-64
Please let me know if I am missing something. My basic question is how to run another Python script from a given Python script, especially when we need to activate and deactivate virtual environments, since the parent and child Python scripts require two different virtual environments due to dependency conflicts.
Kindly help me.
Thanks in advance,
Megh
If the terminal does not show (base) after running conda activate bne, then try running:
conda init
In the bash script you need the shebang line #!/usr/bin/env bash at the top; env is used for portability across different distributions of Linux.
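Putting that together, the bash script would look roughly like this (a sketch; it assumes conda init bash has already been run, or conda's profile script has been sourced, so that conda activate works inside the script):

#!/usr/bin/env bash
# activate the environment that bne.py needs
conda activate bne
CUDA_VISIBLE_DEVICES=1 python bne.py --model models/BNE_SGsc --fe ./embeddings/BioEmb/Emb_SGsc.txt --fi names.txt --fo output_BNE_SGsc.txt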
I installed the latest version of Anaconda, Anaconda3-2020.02.
I was trying to follow this instruction in order to create an environment running python==3.6 instead of python==3.7, because I have Python 3.6 installed.
So, after running
conda create --name snakes python=3.6
and then activating my environment with conda activate snakes, it enters the environment (snakes). However, there are no Anaconda packages inside, like Jupyter Notebook or others, and no anaconda-navigator... So what's the purpose of it, and how can I run these programs from the environment?
Also, for some reason (when I am not in the environment, just a regular bash shell), $PATH is not set up to point to the /bin directory in anaconda, just to /condabin. Can you explain this as well, because I am not able to run anything except the conda command from the shell after the recommended installation.
If the environment is not activated by default after installation, you should activate it by sourcing the file anaconda3/bin/activate. You will then see the indicator (base) on the left of your bash prompt.
A good thing about Anaconda3-2020.02 is that it does not mess with the system Python: the newest Python is launched inside the environment, and the proper $PATH is set up only inside the environment. Only if a specific version of Python is needed for some reason does it make sense to set it up with this instruction; otherwise I would just use the default one with Python 3.7, probably from the (base) environment.
Keep in mind that, by default, Anaconda components are not set up inside a newly created environment. To bring them in, for example Jupyter, you should run a command like this:
conda create --name snakes python=3.6 jupyter
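For example, a minimal sketch of the whole flow (the jupyter notebook launch at the end is just to illustrate that the component is now available in the env):

source anaconda3/bin/activate            # activate (base); the prompt shows (base)
conda create --name snakes python=3.6 jupyter
conda activate snakes
jupyter notebook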
I am writing a bash script with the objective of hosting it on a computing cluster. I want the script to create a conda environment for whichever user executes it, so that everyone on our team can quickly set up the same working environment.
I realize this is a bit overkill for the number of commands necessary, but I wanted to practice some bash scripting. Here is my script so far:
#!/bin/bash
# Load anaconda
module load Anaconda/4.2.0
# Create environment
conda create -n ADNI
# Load environment
source activate ADNI
# Install image processing software
pip install med2image
echo 'A working environment named ADNI has been created.'
echo 'Please run `source activate ADNI` to work in it.'
This script creates the environment successfully. However, once I load the environment after running the script, I run conda list to see which packages are loaded within it and get the following output:
(ADNI) MLG-BH0039:ADNI_DeepLearning johnca$ conda list
# packages in environment at /Users/johnca/miniconda3/envs/ADNI:
#
(ADNI) MLG-BH0039:ADNI_DeepLearning johnca$
This gives me the impression that the environment has no packages loaded in it. Is this correct? If so, how can I alter the script so that the desired packages successfully install into the specified environment?
Thanks!
I managed to find a better way to automate this process by creating an environment.yml file with all the desired packages. This can include pip packages as well. My file looks like this:
name: ADNI
channels:
- soumith
- defaults
dependencies:
- ca-certificates=2017.08.26=h1d4fec5_0
- certifi=2017.11.5=py36hf29ccca_0
- cffi=1.11.2=py36h2825082_0
- freetype=2.8=hab7d2ae_1
- intel-openmp=2018.0.0=hc7b2577_8
- jpeg=9b=h024ee3a_2
- libffi=3.2.1=hd88cf55_4
- libgcc=7.2.0=h69d50b8_2
- libgcc-ng=7.2.0=h7cc24e2_2
- libgfortran-ng=7.2.0=h9f7466a_2
- libpng=1.6.32=hbd3595f_4
- libstdcxx-ng=7.2.0=h7a57d05_2
- libtiff=4.0.9=h28f6b97_0
- mkl=2018.0.1=h19d6760_4
- numpy=1.13.3=py36ha12f23b_0
- olefile=0.44=py36h79f9f78_0
- openssl=1.0.2n=hb7f436b_0
- pillow=4.2.1=py36h9119f52_0
- pip=9.0.1=py36h6c6f9ce_4
- pycparser=2.18=py36hf9f622e_1
- python=3.6.0=0
- readline=6.2=2
- scipy=1.0.0=py36hbf646e7_0
- setuptools=36.5.0=py36he42e2e1_0
- six=1.11.0=py36h372c433_1
- sqlite=3.13.0=0
- tk=8.5.18=0
- wheel=0.30.0=py36hfd4bba0_1
- xz=5.2.3=h55aa19d_2
- zlib=1.2.11=ha838bed_2
- pytorch=0.2.0=py36hf0d2509_4cu75
- torchvision=0.1.9=py36h7584368_1
- pip:
  - cycler==0.10.0
I can then automate creating the environment by referencing this file, as in:
#!/bin/bash
# Load anaconda
module load Anaconda/4.2.0
# Create environment
conda env create -f adni_env.yml
echo ' '
echo 'A working environment named ADNI has been created or updated.'
echo 'If working on the cadillac server please `module load Anaconda/4.2.0`.'
echo 'Then run `source activate ADNI` to work within the environment.'
echo ' '
I hope this can help anyone in the future who may have similar issues.
The command
conda create -n ADNI
creates an environment with no packages installed, not even Python or pip. Therefore, despite activating the environment, you are still using some other pip that appears on your PATH. You need to install pip or Python into the environment first, either when the environment is created or afterwards with the conda install command
conda create -n ADNI python=3.6
will install Python (which brings along pip) when the environment is created, or
conda create -n ADNI
conda install -n ADNI python=3.6
will install Python afterwards.
In the best case, you would use conda to install that package. It isn't all that difficult to create a conda package from a pip package and upload it to a channel on Anaconda.org so your team can access it.
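A rough sketch of that route, using med2image from the script above as the example; the channel/user name is a placeholder, and the exact build output path will differ:

conda install conda-build anaconda-client   # tools for building and uploading packages
conda skeleton pypi med2image               # generate a conda recipe from the PyPI package
conda build med2image                       # build the package; prints the path to the built archive
anaconda upload /path/to/med2image-*.tar.bz2 --user your-team-channel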