I work with Python Poetry and need to automate the process. Specifically, every time I need to go to a directory with
cd /work/directory
And spawn a poetry shell with
poetry shell
and again go to another directory
cd /other/directory
to finish the work.
I would love to automate this with a small script like
#!/bin/bash
cd /work/directory
poetry shell
cd /other/directory
# do work
However, whenever I run this script, I get stuck at poetry shell. Is there any option like bash's -c so that I can do the following?
poetry shell -c "cd /other/directory && do work"
You probably want to run a Python script using the Poetry virtual environment.
In that case, I think you should simply be using the command
poetry run {your_command}
For a more practical example, you can run a Python script as follows:
poetry run python myscript.py
which will use the poetry virtual environment you already created with poetry install.
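For the workflow in the question, a minimal sketch of the script could therefore look like this (do_work.sh is just a placeholder for whatever you actually run in /other/directory):
#!/bin/bash
cd /work/directory
# run the remaining steps inside Poetry's virtual environment,
# without spawning an interactive shell (./do_work.sh is a placeholder)
poetry run bash -c "cd /other/directory && ./do_work.sh"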
A remix of other answers.
poetry shell spawns an environment, but we can also create an environment manually using the env use command.
# This will create a Python 3.7 env
poetry env use 3.7
Why use env use vs shell?
env use is non-interactive: it produces its output and ends the process, and it also activates the environment as required.
So after env use completes, we can actually follow @DaveR's answer and use the run command.
A chained version of this, as a my_bash_script.sh script, would be:
#!/bin/bash
# Activate (or create and activate) my environment
poetry env use 3.7
# Run my script in the above environment
poetry run python3.7 my_script.py
Extra
The env command is helpful to see which envs you have activated:
poetry env list
I ran into the same problem. This was posted by someone in the Poetry Discord server:
I am not sure what running poetry shell exactly entails in all details. But...
Maybe it is good enough for you to just activate the virtual environment.
And if what you want to do is not interactive work on the command line, then activating the environment might not even be necessary.
Typically you would first need to identify the path to the virtual environment with something like poetry env info --path. Once you know that you can "activate it" from anywhere:
. /path/to/venv/bin/activate
Or (in most cases) you can just use any of the executables from that virtual environment directly without having to activate first:
/path/to/venv/bin/python path/to/script.py
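Putting that together for the script in the question, a rough sketch could be (the script path is only a placeholder):
#!/bin/bash
# resolve the project's virtual environment once
VENV="$(cd /work/directory && poetry env info --path)"
# call that venv's interpreter directly, no activation needed (my_script.py is a placeholder)
"$VENV/bin/python" /other/directory/my_script.py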
Related
I have a Dockerfile in which I try to activate a Python virtualenv, after which it should install all dependencies within this env. However, everything still gets installed globally. I have used different approaches and none of them worked. I also do not get any errors. Where is the problem?
1.
ENV PATH $PATH:env/bin
2.
ENV PATH $PATH:env/bin/activate
3.
RUN . env/bin/activate
I also followed an example of a Dockerfile config for the python-runtime image on Google Cloud, which is basically the same stuff as above.
Setting these environment variables is the same as running source /env/bin/activate:
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
Additionally, what does ENV VIRTUAL_ENV /env mean and how is it used?
You don't need to use virtualenv inside a Docker Container.
virtualenv is used for dependency isolation. You want to prevent any dependencies or packages installed from leaking between applications. Docker achieves the same thing: it isolates your dependencies within your container and prevents leaks between containers and between applications.
Therefore, there is no point in using virtualenv inside a Docker container unless you are running multiple apps in the same container. If that's the case, I'd say that you're doing something wrong, and the solution would be to architect your app in a better way and split it up into multiple containers.
EDIT 2022: Given that this answer gets a lot of views, I thought it might make sense to add that now, 4 years later, I have realized that there actually are valid usages of virtual environments in Docker images, especially when doing multi-stage builds:
FROM python:3.9-slim as compiler
ENV PYTHONUNBUFFERED 1
WORKDIR /app/
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt /app/requirements.txt
RUN pip install -Ur requirements.txt
FROM python:3.9-slim as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app/
CMD ["python", "app.py", ]
In the Dockerfile example above, we create a virtualenv at /opt/venv and activate it using an ENV statement. We then install all dependencies into /opt/venv and can simply copy this folder into the runner stage of our build. This can help with minimizing the Docker image size.
There are perfectly valid reasons for using a virtualenv within a container.
You don't necessarily need to activate the virtualenv to install software or use it. Try invoking the executables directly from the virtualenv's bin directory instead:
FROM python:2.7
RUN virtualenv /ve
RUN /ve/bin/pip install somepackage
CMD ["/ve/bin/python", "yourcode.py"]
You may also just set the PATH environment variable so that all further Python commands will use the binaries within the virtualenv, as described in https://pythonspeed.com/articles/activate-virtualenv-dockerfile/:
FROM python:2.7
RUN virtualenv /ve
ENV PATH="/ve/bin:$PATH"
RUN pip install somepackage
CMD ["python", "yourcode.py"]
Setting these variables
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
is not exactly the same as just running
RUN . env/bin/activate
because activation inside a single RUN will not affect any lines below that RUN in the Dockerfile, whereas setting environment variables through ENV will activate your virtual environment for all subsequent RUN commands.
Look at this example:
RUN virtualenv /env # set up the env
RUN which python # -> /usr/bin/python
RUN . /env/bin/activate && which python # -> /env/bin/python
RUN which python # -> /usr/bin/python
So if you really need to activate virtualenv for the whole Dockerfile you need to do something like this:
RUN virtualenv /env
# activate the environment for all subsequent instructions
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN which python # -> /env/bin/python
Although I agree with Marcus that this is not the way of doing things with Docker, you can do what you want.
Using Docker's RUN command directly will not give you what you want, as it will not execute your instructions from within the virtual environment. Instead, squeeze the instructions into a single RUN line executed with /bin/bash. The following Dockerfile worked for me:
FROM python:2.7
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate && pip install pyserial && deactivate"
...
This should install the pyserial module only in the virtual environment.
If you are using Python 3.x:
RUN pip install virtualenv
RUN virtualenv -p python3.5 virtual
RUN /bin/bash -c "source /virtual/bin/activate"
If you are using Python 2.x:
RUN pip install virtualenv
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate"
Consider a migration to pipenv, a tool which will automate virtualenv and pip interactions for you. It's recommended by PyPA.
Reproducing an environment via pipenv in a Docker image is very simple:
FROM python:3.7
RUN pip install pipenv
COPY src/Pipfile* ./
RUN pipenv install --deploy
...
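To actually start the application inside that environment, you could then add something like this (app.py is only a placeholder for your entrypoint):
COPY src/ ./
CMD ["pipenv", "run", "python", "app.py"]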
I recently got a new Mac with an M1 processor. From the terminal I installed Miniconda through Homebrew. Then, I tried to create a new conda environment as usual using conda create --name my_env. Then, when I try to activate the environment using conda activate my_env, I get the following error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I obey, of course, and I run conda init bash and close and restart my shell. However, when I then try to activate my environment I receive the above error again (and rerunning conda init bash does not fix the problem because it simply says 'no action taken').
Questions:
Does anyone know why I still can't run conda activate?
Is it smart to even use miniconda for Python without Rosetta?
Many thanks
macOS changed the default shell from Bash to Zsh a couple of years ago. So, you need to run
conda init zsh
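After that, restart the shell so that the changes conda init wrote to ~/.zshrc take effect (for example with exec zsh), and then activate the environment from the question:
exec zsh
conda activate my_env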
I want to create a bash script to install a new virtual environment "ABC" in conda. But before I go ahead and run a command to create this env, I want to check if conda is already installed on the machine. If not installed, I want to install miniconda and then create the env "ABC". If conda is already installed then I would just go ahead and create the environment. (All this should happen within the same script)
I just want to know if it is possible to check the existence of conda within a bash script and then proceed with the rest of the installations?
#!/bin/bash
<code_to_check_existence_of_conda_env_here ?>
# If it does not exist, I will run the code below
mkdir -p miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda3/miniconda.sh
bash miniconda3/miniconda.sh -b -u -p ~/miniconda3
conda env update -f my_env.yml
The main issue here is that Conda has multiple components. Which components are loaded depends on how Conda is installed and which user the BASH script is executing under. I'll try describing the components and hopefully you can decide what is suitable to verify.
Conda Components
1: Conda Package
The first is a Python package conda, which is installed in the Anaconda/Miniconda/Miniforge base environment. If the base environment is activated, one could test
python -m conda
which will give something like
/path/to/python: No module named conda
if it isn't there. Otherwise, it outputs the conda entrypoint's documentation.
2: Conda Entrypoint
The entrypoint conda, which acts as a CLI, is located under the condabin directory of the Anaconda/Miniconda/Miniforge installation. When a user runs conda init, a managed section is added to their shell initialization file (.bashrc for Linux BASH) that includes code to add condabin to PATH. This is most likely what OP wants to identify; however, running with the shebang /bin/bash will not load the .bashrc file. Instead, one should probably be using
#!/bin/bash -l
or
#!/usr/bin/env bash -l
Then the entrypoint can be located with
which conda
3: Conda Activate
Finally, Conda also includes some shell-only functions, which are defined in the aforementioned shell initialization code. This sets up a middleman shell function, also called conda (essentially an alias), which can be viewed with
type conda
This function serves to determine whether the conda (de)?activate commands are being requested, which are pure shell functions, or something that needs to be forwarded to the entrypoint.
Recommendation
Were I designing this, I would write an interactive script that checks for #2 (which conda) and, if that comes up blank, prompts the user to either provide the PATH to the Conda entrypoint (maybe they installed it in a weird place or didn't run conda init) or install Miniforge.[1]
I also would not use the base environment to install stuff - that is a bad idea for an end-user, let alone a third-party - and instead create a new environment. I would prompt the user with a specific default environment name, but also provide an option for them to customize.
[1] Yes, Miniforge, not Miniconda. Commercial use of the Anaconda defaults channels now requires a paid license, so better to use the free Miniforge.
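A minimal sketch of the check described above (the prompt/install part is only indicated, and my_env.yml is the file from the question):
#!/bin/bash -l
# -l makes bash load the init files that contain the conda init block
if ! which conda > /dev/null 2>&1; then
    echo "conda entrypoint not found on PATH"
    # prompt for its location or install Miniforge here
    exit 1
fi
conda env update -f my_env.yml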
I'm not really good at bash scripting, but I would execute a command:
pip3 freeze | grep conda
and, based on the output (whether the exit status was an error or not), either install it or directly create the environment.
Run the command conda list on your machine; the output will be different when it's installed vs. not installed. Then run an if statement on the output.
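For example, a rough sketch that simply relies on conda list failing when conda is absent:
if conda list > /dev/null 2>&1; then
    echo "conda is installed"
else
    echo "conda is not installed, installing Miniconda..."
fi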
In bash, if we want to check whether a piece of software is installed or not, a check like the following is used:
if dpkg -l "$SOFTWARE"; then
    # do stuff
fi
ensureconda
This sounds like the intended problem that the experimental tool ensureconda was designed to solve. This, however, requires a Python installation with pip:
pip install ensureconda
Here are the command options:
$ ensureconda --help
Usage: ensureconda [OPTIONS]
Ensures that a conda/mamba is installed.
Options:
--mamba / --no-mamba search for mamba
--micromamba / --no-micromamba search for micromamba, install if not
present
--conda / --no-conda search for conda
--conda-exe / --no-conda-exe search for conda.exe / conda-standalone,
install if not present
--no-install don't install conda/mamba if no version can
be discovered
--min-conda-version VERSIONNUMBER
minimum version of conda to accept (defaults
to 4.8.2)
--min-mamba-version VERSIONNUMBER
minimum version of mamba/micromamba to
accept (defaults to 0.7.3)
--help Show this message and exit.
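A usage sketch, assuming ensureconda prints the path of the executable it finds to stdout (my_env.yml is the file from the question):
#!/bin/bash
# look for an existing conda/mamba without installing anything new
CONDA_EXE="$(ensureconda --no-install)" || exit 1
"$CONDA_EXE" env update -f my_env.yml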
Type conda activate
or conda activate env_name
The first command will activate the base environment if it is already installed. The second command will activate the user-created environment if it is installed.
I want to install Miniconda in a Databricks environment.
I run the following code:
%sh
/dbfs/FileStore/Miniconda3_latest_Linux_x86_64.sh
But I can't interact with the command line when it asks me to press Enter.
How do I pass Enter in a Databricks notebook?
You can try installing Miniconda in silent mode, which will not prompt you to press any keys to accept the license, etc.
%sh
/dbfs/FileStore/Miniconda3_latest_Linux_x86_64.sh -b
You'll need to follow the other instructions in the link in order to set up the conda environment properly. Maybe something like
%sh
eval "$(/path/to/miniconda/bin/conda shell.bash hook)"
conda init
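Creating an environment can also be done without any prompts by passing -y; the environment name and Python version below are just placeholders:
%sh
/path/to/miniconda/bin/conda create -y -n my_env python=3.9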
I use virtualenvwrapper from apt.
It's working OK with bash, but I recently switched to zsh.
Now when I try workon in zsh, I get zsh: command not found: workon.
Because I'm using oh-my-zsh scripts/plugins, I thought it would be sufficient to add the virtualenv and virtualenvwrapper plugins to my .zshrc plugins=.
But it did not help. What else do I need to configure to make it work under zsh?
PS: to be clear, I can still use bash for this; nothing is broken here...
I just tested it on Ubuntu 14.04 and I had the same problem.
To fix it, add this to your .zshrc:
source /usr/share/virtualenvwrapper/virtualenvwrapper.sh
or run this in a terminal:
echo source /usr/share/virtualenvwrapper/virtualenvwrapper.sh >> ~/.zshrc
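If the script is not at that exact path on your system, you can ask dpkg where the apt package placed it, for example:
dpkg -L virtualenvwrapper | grep virtualenvwrapper.sh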