pip with venv shows many modules installed - python-3.x

I built Python 3.3.0 from source on my Ubuntu 13.10 laptop.
When I create a virtual environment with the command /usr/bin/virtualenv -p /python3.3.0/bin/python3 foo_virt, pip freeze shows no modules installed, which is the behavior I expect.
When I use /python3.3.0/bin/python3 -m venv foo_virt instead, I see tons of modules installed:
(foo_virt) user@laptop:/foo_virt$ /usr/bin/pip freeze --local
Jinja2==2.7
Mako==0.8.1
MarkupSafe==0.15
PAM==0.4.2
Pillow==2.0.0
Pygments==1.6
SecretStorage==1.0.0
... (total of 75 modules listed)
I then tried to install pip for that specific version of Python by running python3 get-pip.py, as per the module's documentation. But I still see all these modules:
(foo_virt) user@laptop:/foo_virt$ which pip
/foo_virt/bin/pip
(foo_virt) user@laptop:/foo_virt$ pip freeze --local
Jinja2==2.7
Mako==0.8.1
MarkupSafe==0.15
PAM==0.4.2
Pillow==2.0.0
Pygments==1.6
SecretStorage==1.0.0
... (still 75 modules)
How do I use venv so that no modules are installed in the virtual environment? I didn't find any option in the documentation that helps. Also, this issue does not happen on Windows 7. Thanks!

bash caches commands found by searching PATH. You can see the current cache by entering hash; hash -r resets the cache, and hash -d deletes an individual name. Sourcing the activate script should reset the cache:
# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then
hash -r
fi
Maybe you ran the system pip before get-pip.py. In that case hash -d pip solves the problem.
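In practice, a quick way to check and fix the stale lookup (assuming the virtual environment lives at /foo_virt, as in the question):
hash            # show bash's cache of resolved commands
hash -d pip     # forget only the cached location of pip
hash -r         # or reset the whole cache
type -a pip     # should now list /foo_virt/bin/pip first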

Related

How to find out if conda is already available on a machine within a bash script?

I want to create a bash script to install a new virtual environment "ABC" in conda. But before I go ahead and run a command to create this env, I want to check if conda is already installed on the machine. If not installed, I want to install miniconda and then create the env "ABC". If conda is already installed then I would just go ahead and create the environment. (All this should happen within the same script)
Is it possible to check for the existence of conda within a bash script and then proceed with the rest of the installation?
#!/bin/bash
<code_to_check_existence_of_conda_env_here ?>
# If conda does not exist, I will run the code below
mkdir -p miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda3/miniconda.sh
bash miniconda3/miniconda.sh -b -u -p ~/miniconda3
conda env update -f my_env.yml
The main issue here is that Conda has multiple components. Which components are loaded depends on how Conda is installed and what user the bash script is executing under. I'll describe the components, and hopefully you can decide what is suitable to verify.
Conda Components
1: Conda Package
The first is a Python package conda, which is installed in the Anaconda/Miniconda/Miniforge base environment. If the base environment is activated, one could test
python -m conda
which will give something like
/path/to/python: No module named conda
if it isn't there. Otherwise, it outputs the conda entrypoint's documentation.
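For use inside a script, a minimal sketch of a check for this component (assuming the python on PATH is the one that would carry the conda package):
# Succeeds only if the active python can import the conda package
if python -c "import conda" 2>/dev/null; then
    echo "conda Python package is importable"
fi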
2: Conda Entrypoint
The entrypoint conda, which acts as a CLI, is located under the condabin directory of the Anaconda/Miniconda/Miniforge installation. When a user runs conda init, a managed section is added to their shell initialization file (.bashrc for bash on Linux) that includes code to add condabin to PATH. This is most likely what the OP wants to identify; however, running with the shebang /bin/bash will not load the .bashrc file. Instead, one should probably use
#!/bin/bash -l
or
#!/usr/bin/env bash -l
Then the entrypoint can be located with
which conda
3: Conda Activate
Finally, Conda also includes some shell-only functions, which are defined in the aforementioned shell initialization code. This sets up a middleman shell function, also called conda (essentially an alias), which can be viewed with
type conda
This function serves to determine whether the conda (de)?activate commands are being requested, which are pure shell functions, or something that needs to be forwarded to the entrypoint.
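If a non-interactive script needs this shell function without relying on .bashrc, one common approach (assuming the entrypoint from #2 is already on PATH, and my_env is a hypothetical environment name) is to source Conda's shell hook directly:
# Load the conda shell function, then activate an environment
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate my_env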
Recommendation
Were I designing this, I would write an interactive script that checks for #2 (which conda) and, if that comes up blank, prompts the user to either provide the PATH to the Conda entrypoint (maybe they installed it in a weird place or didn't run conda init) or install Miniforge.[1]
I also would not use the base environment to install stuff - that is a bad idea for an end-user, let alone a third-party - and instead create a new environment. I would prompt the user with a specific default environment name, but also provide an option for them to customize.
[1] Yes, Miniforge, not Miniconda. Commercial use of the Anaconda defaults channels now requires a paid license, so better to use the free Miniforge.
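A minimal, non-interactive sketch of that recommendation (the Miniforge installer URL follows the project's documented naming scheme, the environment name ABC comes from the question, and the grep-based name check is a simple heuristic; adjust all of these as needed):
#!/bin/bash -l
# Install Miniforge only if the conda entrypoint is not already on PATH
if ! command -v conda >/dev/null 2>&1; then
    wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh" -O miniforge.sh
    bash miniforge.sh -b -p "$HOME/miniforge3"
    source "$HOME/miniforge3/etc/profile.d/conda.sh"
fi
# Create a dedicated environment instead of installing into base
if ! conda env list | grep -qE '^ABC\s'; then
    conda env create -n ABC -f my_env.yml
fi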
I'm not really good at bash scripting but I would execute a command:
pip3 freeze | grep conda
and based on the exit status (error or not) either install it or directly create the environment.
Run the command conda list on your machine; the output will be different when it's installed vs. not installed, then run an if statement on the output.
In bash, if we want to check whether a piece of software is installed, the following check can be used:
if dpkg -l "$SOFTWARE"; then
    # Do stuff
fi
ensureconda
This sounds like exactly the problem that the experimental tool ensureconda was designed to solve. It does, however, require a Python installation with pip:
pip install ensureconda
Here are the command options:
$ ensureconda --help
Usage: ensureconda [OPTIONS]

  Ensures that a conda/mamba is installed.

Options:
  --mamba / --no-mamba            search for mamba
  --micromamba / --no-micromamba  search for micromamba, install if not
                                  present
  --conda / --no-conda            search for conda
  --conda-exe / --no-conda-exe    search for conda.exe / conda-standalone,
                                  install if not present
  --no-install                    don't install conda/mamba if no version can
                                  be discovered
  --min-conda-version VERSIONNUMBER
                                  minimum version of conda to accept (defaults
                                  to 4.8.2)
  --min-mamba-version VERSIONNUMBER
                                  minimum version of mamba/micromamba to
                                  accept (defaults to 0.7.3)
  --help                          Show this message and exit.
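A possible usage sketch: per the project's README, the path of the executable it finds (or installs) is printed to stdout, so a script can capture it; treat that behavior as an assumption and verify it against your installed version:
CONDA_EXE="$(ensureconda --no-install)"
if [ -n "$CONDA_EXE" ]; then
    echo "Found conda-compatible executable at: $CONDA_EXE"
else
    echo "No conda/mamba installation found" >&2
fi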
Type conda activate
or conda activate env_name
The first command will activate the base environment if Conda is installed. The second will activate the user-created environment if it exists.

UWSGI error with PCRE Ubuntu 20.04 error while loading shared libraries: libpcre.so.1:

I run through the following steps to attempt to start up an app for production:
- Set up a virtualenv for the Python dependencies: virtualenv -p /usr/bin/python3.8 ~/app_env
- Install pip dependencies: . ~/app_env/bin/activate && pip install -r ~/app/requirements.txt
- Un-comment the lines for privilege dropping in uwsgi.ini and change the uid and gid to your account name
- Log in to root with sudo -s and re-source the env with source /home/usr/app_env/bin/activate
- Set the courthouse to production mode by setting the environment variable with export PRODUCTION=1
- Start the app: cd /home/usr/app && ./start_script.sh
And I get the following error:
(app_env) root@usr-Spin-SP314-53N:/home/usr/Desktop/app# ./start.sh
uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
I tried a few things, such as installing a newer libpcre version as mentioned here, and also tried the steps mentioned here, but that didn't work. The environment I'm setting up doesn't use Anaconda but regular Python. I even tried pip install uwsgi in my virtual env, but it said the requirement was already satisfied. I'm not much of an expert when it comes to somewhat complex package management like this, so help with how to solve this would be greatly appreciated. Thanks. I'm on Ubuntu 20.04, using Python 3.8.
What solved it for me was apparently just reinstalling UWSGI, like in this thread, in my virtual env while forcing it to ignore the cache so it could know to use the pcre library I installed.
In order, doing this
uwsgi --version
Was giving me this
uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
So I made sure I had the latest libpcre installed
sudo apt-get install libpcre3-dev
And then what linked it all together was this
pip install uwsgi -I --no-cache-dir
I tried to solve this error but nothing worked no matter what I did, so I reinstalled uwsgi; the following two lines are what solved my problem:
sudo find / -name "libpcre.so.*"
# replace /home/anaconda3/lib/libpcre.so.1 below with the path that appears in the output of the command above
sudo ln -s /home/anaconda3/lib/libpcre.so.1 /lib
which python

How do I include a Python package directly in my current project under Linux? [duplicate]

I know the obvious answer is to use virtualenv and virtualenvwrapper, but for various reasons I can't/don't want to do that.
So how do I modify the command
pip install package_name
to make pip install the package somewhere other than the default site-packages?
The --target switch is the thing you're looking for:
pip install --target=d:\somewhere\other\than\the\default package_name
But you still need to add d:\somewhere\other\than\the\default to PYTHONPATH to actually use them from that location.
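For example, on Linux (using a hypothetical target directory /opt/mylibs instead of the Windows path above):
pip install --target=/opt/mylibs package_name
export PYTHONPATH=/opt/mylibs:$PYTHONPATH
python -c "import package_name"   # should now resolve from /opt/mylibs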
-t, --target <dir>
Install packages into <dir>. By default this will not replace existing files/folders in <dir>.
Use --upgrade to replace existing packages in <dir> with new versions.
Upgrade pip if target switch is not available:
On Linux or OS X:
pip install -U pip
On Windows (this works around an issue):
python -m pip install -U pip
Use:
pip install --install-option="--prefix=$PREFIX_PATH" package_name
You might also want to use --ignore-installed to force all dependencies to be reinstalled using this new prefix. You can use --install-option multiple times to add any of the options you can use with python setup.py install (--prefix is probably what you want, but there are a bunch more options you could use).
Instead of the --target or --install-options options, I have found that setting the PYTHONUSERBASE environment variable works well (from discussion on a bug regarding this very thing):
PYTHONUSERBASE=/path/to/install/to pip install --user
(Or set the PYTHONUSERBASE directory in your environment before running the command, using export PYTHONUSERBASE=/path/to/install/to)
This uses the very useful --user option but tells it to make the bin, lib, share and other directories you'd expect under a custom prefix rather than $HOME/.local.
Then you can add this to your PATH, PYTHONPATH and other variables as you would a normal installation directory.
Note that you may also need to specify the --upgrade and --ignore-installed options if any packages upon which this depends require newer versions to be installed in the PYTHONUSERBASE directory, to override the system-provided versions.
A full example
PYTHONUSERBASE=/opt/mysterypackage-1.0/python-deps pip install --user --upgrade numpy scipy
...to install the most recent versions of the scipy and numpy packages into a directory which you can then include in your PYTHONPATH like so (using bash and Python 2.6 on CentOS 6 for this example):
export PYTHONPATH=/opt/mysterypackage-1.0/python-deps/lib64/python2.6/site-packages:$PYTHONPATH
export PATH=/opt/mysterypackage-1.0/python-deps/bin:$PATH
Using virtualenv is still a better and neater solution!
To pip install a library exactly where I wanted it, I navigated in the terminal to the location where I wanted the directory, then used
pip install mylibraryName -t .
the logic of which I took from this page: https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/download
Installing a Python package often only involves some pure Python files. If the package includes data, scripts and/or executables, these are installed in different directories from the pure Python files.
Assuming your package has no data/scripts/executables, and that you want your Python files to go into /python/packages/package_name (and not some subdirectory a few levels below /python/packages as when using --prefix), you can use the one time command:
pip install --install-option="--install-purelib=/python/packages" package_name
If you want all (or most) of your packages to go there, you can edit your ~/.pip/pip.conf to include:
[install]
install-option=--install-purelib=/python/packages
That way you don't have to specify it again and again.
Any executables/data/scripts included in the package will still go to their default places unless you specify additional install options (--prefix/--install-data/--install-scripts, etc.; for details, look at the custom installation options).
Tested these options with python3.5 and pip 9.0.3:
pip install --target /myfolder [packages]
Installs ALL packages, including dependencies, under /myfolder. Does not take into account that dependent packages may already be installed elsewhere in Python. You will find the packages in /myfolder/[package_name]. If you have multiple Python versions, this doesn't take that into account either (there is no Python version in the package folder name).
pip install --prefix /myfolder [packages]
Checks if dependencies are already installed. Will install packages into /myfolder/lib/python3.5/site-packages/[packages]
pip install --root /myfolder [packages]
Checks dependencies like --prefix but install location will be /myfolder/usr/local/lib/python3.5/site-packages/[package_name].
pip install --user [packages]
Will install packages into $HOME:
/home/[USER]/.local/lib/python3.5/site-packages
Python searches automatically from this .local path so you don't need to put it to your PYTHONPATH.
=> In most cases --user is the best option to use.
If the home folder can't be used for some reason, then use --prefix.
pip3 install "package_name" -t "target_dir"
source - https://pip.pypa.io/en/stable/reference/pip_install/
-t switch = target
Nobody seems to have mentioned the -t option, but that's the easiest:
pip install -t <target directory> <package>
pip install packageName -t pathOfDirectory
or
pip install packageName --target pathOfDirectorty
Just to add one point to @Ian Bicking's answer:
Using the --user option to specify the install directory also works if one wants to install a Python package into one's home directory (without sudo rights) on a remote server.
E.g.,
pip install --user python-memcached
The command will install the package into one of the directories listed in your PYTHONPATH.
Newer versions of pip (8 or later) can directly use the --prefix option:
pip install --prefix=$PREFIX_PATH package_name
where $PREFIX_PATH is the installation prefix where lib, bin and other top-level folders are placed.
To add to the already good advice: I had an issue installing IPython when I didn't have write permissions to /usr/local.
pip uses distutils to do its install and this thread discusses how that can cause a problem as it relies on the sys.prefix setting.
My issue happened when the IPython install tried to write to '/usr/local/share/man/man1' with Permission denied. As the install failed it didn't seem to write the IPython files in the bin directory.
Using "--user" worked and the files were written to ~/.local. Adding ~/.local/bin to the $PATH meant I could use "ipython" from there.
However I'm trying to install this for a number of users and had been given write permission to the /usr/local/lib/python2.7 directory. I created a "bin" directory under there and set directives for distutils:
vim ~/.pydistutils.cfg
[install]
install-data=/usr/local/lib/python2.7
install-scripts=/usr/local/lib/python2.7/bin
then (-I is used to force the install despite previous failures/.local install):
pip install -I ipython
Then I added /usr/local/lib/python2.7/bin to $PATH.
I thought I'd include this in case anyone else has similar issues on a machine they don't have sudo access to.
If you are using brew with Python, unfortunately pip/pip3 ships with very limited options: you do not have the --install-option, --target, or --user options mentioned above.
Note on pip install --user
The normal pip install --user is disabled for brewed Python. This is because of a bug in distutils, because Homebrew writes a distutils.cfg which sets the package prefix.
A possible workaround (which puts executable scripts in ~/Library/Python/./bin) is:
python -m pip install --user --install-option="--prefix=" <package-name>
You might find this line very cumbersome; I suggest using pyenv for management.
If you run
brew upgrade python python3
you are, ironically, actually downgrading pip functionality.
(I posted this answer simply because pip on my macOS does not have the --target option, and I spent hours fixing it.)
With pip v1.5.6 on Python v2.7.3 (GNU/Linux), the --root option allows specifying a global installation prefix, (apparently) irrespective of a specific package's options. For instance, try:
$ pip install --root=/alternative/prefix/path package_name
I suggest following the documentation and creating a ~/.pip/pip.conf file. Note that the documentation omits the headers directory, which leads to the following error:
error: install-base or install-platbase supplied, but installation scheme is incomplete
The full working content of conf file is:
[install]
install-base=$HOME
install-purelib=python/lib
install-platlib=python/lib.$PLAT
install-scripts=python/scripts
install-headers=python/include
install-data=python/data
Unfortunately I can install, but when I try to uninstall, pip tells me there is no such package to uninstall... so something is still wrong, but the package does go to its predefined location.
pip install /path/to/package/
is now possible.
The difference between this and using the -e or --editable flag is that -e links to where the package is saved (i.e. your downloads folder) rather than installing it into your Python path.
This means that if you delete or move the package to another folder, you won't be able to use it.
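A quick illustration of the difference (the path is hypothetical):
pip install /path/to/package      # copies the package into site-packages
pip install -e /path/to/package   # installs a link; the code stays in /path/to/package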
The --system option will install pip package bins to /usr/local/bin, which is accessible to all users. Installing without this option may not work for all users, since things go to a user-specific dir like $HOME/.local/bin; the install is then user-specific and has to be repeated for every user, and if PATH is not set for those users the bins won't work. So if you are installing for all users, you need sudo access:
sudo su -
python3 -m pip install --system <module>
logout
log back in
which <module-bin> --> it should be installed on /usr/local/bin/
Sometimes it only works with the cache argument:
python -m pip install -U pip --target=C:\xxx\python\lib\site-packages Pillow --cache-dir C:\tmp

virtualenv not found in path

For some reason virtualenv is not in my path after installing with pip3. I have a fresh install of ubuntu 16.04.
sudo apt-get install pip3
pip3 install virtualenv
virtualenv # command not found!!!
edit: I also installed jupyter notebook with pip3 and its not in the path either.
Python executables are placed in ~/.local/bin/ on Ubuntu 16.04.
This location is not in $PATH so edit your .bashrc to append it there.
# .bashrc file
export PATH=$PATH:~/.local/bin
This is Ubuntu only (didn't check other distros)
TL;DR (if you used pip to install the package), run the following command:
$ source ~/.profile
If you examine .profile, there is a script that looks like the following
(this is the 18.04 version; 16.04 has something slightly different):
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
PATH="$HOME/.local/bin:$PATH"
fi
which means everything under ~/.local/bin will be added to PATH. So if you used pip to install a package and try to run it from the prompt, as long as pip created a file under that folder you will be able to run the command without the full path.
You can also just restart the session, whichever you feel more comfortable with.
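A quick way to verify, assuming virtualenv was installed into the user scheme:
source ~/.profile
command -v virtualenv   # should now print something like /home/<user>/.local/bin/virtualenv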

How do I remove/delete a virtualenv?

I created an environment with the following command: virtualenv venv --distribute
I cannot remove it with the following command: rmvirtualenv venv
(This is part of virtualenvwrapper, as mentioned in an answer below.)
I do an ls in my current directory and I still see venv
The only way I can remove it seems to be: sudo rm -rf venv
Note that the environment is not active. I'm running Ubuntu 11.10. Any ideas? I've tried rebooting my system to no avail.
"The only way I can remove it seems to be: sudo rm -rf venv"
That's it! There is no command for deleting your virtual environment. Simply deactivate it and rid your application of its artifacts by recursively removing it.
Note that this is the same regardless of what kind of virtual environment you are using. virtualenv, venv, Anaconda environments, pyenv, and pipenv are all based on the same principle here.
Just to echo what @skytreader had previously commented, rmvirtualenv is a command provided by virtualenvwrapper, not virtualenv. Maybe you didn't have virtualenvwrapper installed?
See VirtualEnvWrapper Command Reference for more details.
Use rmvirtualenv
Remove an environment from $WORKON_HOME.
Syntax:
rmvirtualenv ENVNAME
You must use deactivate before removing the current environment.
$ rmvirtualenv my_env
Reference: http://virtualenvwrapper.readthedocs.io/en/latest/command_ref.html
You can remove all the dependencies by recursively uninstalling all of them and then deleting the venv.
Edit, incorporating Isaac Turner's comment:
source venv/bin/activate
pip freeze > requirements.txt
pip uninstall -r requirements.txt -y
deactivate
rm -r venv/
If you are using pyenv, it is possible to delete your virtual environment:
$ pyenv virtualenv-delete <name>
Simply delete the virtual environment from the system:
rm -rf venv
(There's no special command for it)
from virtualenv's official document https://virtualenv.pypa.io/en/latest/user_guide.html
Removing an Environment
Removing a virtual environment is simply done by deactivating it and deleting the environment folder with all its contents:
(ENV)$ deactivate
$ rm -r /path/to/ENV
1. Remove the Python environment
There is no command to remove a virtualenv, so you need to do that by hand: deactivate it if it is active and remove the folder:
deactivate
rm -rf <env path>
2. Create an env. with another Python version
When you create an environment, the current Python version is used by default, so if you want another one you will need to specify it at the moment you create it. To make an env with Python 3.X called MyEnv, just type:
python3.X -m venv MyEnv
Now, to make one with Python 2.X, use virtualenv instead of venv:
python2.X -m virtualenv MyEnv
3. List all Python versions on my machine
If any of the previous lines of code didn't work, you probably don't have the specific version installed. First list all your versions with:
ls -ls /usr/bin/python*
If you didn't find it, install Python 3.X using apt-get:
sudo apt-get install python3.X
I used pyenv uninstall my_virt_env_name to delete the virtual environment.
Note: I'm using pyenv-virtualenv installed through the install script.
The following command works for me.
rm -rf /path/to/virtualenv
If you are a Windows user and you are using conda to manage the environment in Anaconda prompt, you can do the following:
Make sure you deactivate the virtual environment or restart Anaconda Prompt. Use the following command to remove virtual environment:
$ conda env remove --name $MyEnvironmentName
Alternatively, you can go to the
C:\Users\USERNAME\AppData\Local\Continuum\anaconda3\envs\MYENVIRONMENTNAME
(that's the default file path) and delete the folder manually.
This actually requires two deletions.
First, the project folder, which, as everyone in this thread has already said, you simply delete manually or with rm -r projectfoldername.
But then you also need to delete the actual virtualenv, located (on macOS) at /Users/edison/.pyenv/versions/3.8.0/envs/myspecialenv.
You can do that with pyenv virtualenv-delete myspecialenv or by manual removal.
If you are a Windows user, then it's in C:\Users\your_user_name\Envs. You can delete it from there.
Also try, in the command prompt, rmvirtualenv environment_name.
I tried it with the command prompt; it said deleted, but it still existed, so I deleted it manually.
cd \environmentfolder_name\Scripts\deactivate.bat
deactivate is the command you are looking for. As has already been said, there is no command for deleting your virtual environment. Simply deactivate it!
If you're a Windows user, you can also delete the environment by going to C:/Users/username/Anaconda3/envs. There you can see a list of virtual environments and delete the one that you no longer need.
If you are using pyenv-virtualenv (https://github.com/pyenv/pyenv) to centrally manage Python versions and virtual environments, the solution would be
pyenv uninstall some_env
(Assuming that you have set up your bash/zsh profile correctly.)
The solution to this issue is also answered here:
https://github.com/pyenv/pyenv-virtualenv/issues/17
Hope this helps 👍🏻
Just use Anaconda Navigator to remove selected env.
It is possible that some resources are still active, making it impossible to just delete the directory. All Python processes should be stopped in advance:
pkill -9 python
rm -rf venv
You can follow these steps to remove all the files associated with a broken virtualenv and then recreate and use it again:
cd {python virtualenv folder}
find {broken virtualenv}/ -type l ## to list out all the links
deactivate ## deactivate if virtualenv is active
find {broken virtualenv}/ -type l -delete ## to delete the broken links
virtualenv {broken virtualenv} --python=python3 ## recreate links to OS's python
workon {broken virtualenv} ## activate & workon the fixed virtualenv
pip3 install ... {other packages required for the project}
For the new versions do:
conda deactivate
conda env remove -n env_name
Step 1: uninstall virtualenv and virtualenvwrapper by copying and pasting the command below:
$ sudo pip uninstall virtualenv virtualenvwrapper
Step 2: go to .bashrc and delete all virtualenv and virtualenvwrapper entries.
open terminal:
$ sudo nano .bashrc
Scroll down and you will see the code below, then delete it:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
next, source the .bashrc:
$ source ~/.bashrc
Final step: outside the terminal/shell, go to your home directory and find .virtualenvs (I forgot the exact name, so if you find something similar like .virtualenv or .venv, just delete it). That will work.
