I'm using Linux Mint 14 and installed virtualenv via apt-get:
$ sudo apt-get install python-virtualenv
$ virtualenv --version
> 1.7.1.2
The problem is that it's an old version. When I install it via pip, it installs the version I want, but it is only accessible as the root user:
$ pip install virtualenv --upgrade (fails)
> ...OSError: [Errno 13] Permission denied: '/usr/bin/virtualenv'
$ sudo pip install virtualenv
$ virtualenv --version
> bash: /usr/bin/virtualenv: No such file or directory
$ sudo virtualenv --version
> 1.8.4
Any hint?
You need to check the permissions for the file:
ls -alrt /usr/bin/virtualenv
The file needs to have execute permission for user, group, and others.
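If the execute bit is missing, a minimal fix would look like this (a sketch, assuming the file really is at /usr/bin/virtualenv):
# Owner gets read/write/execute; group and others get read/execute
sudo chmod 755 /usr/bin/virtualenv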
I am using EC2 Ubuntu 18.04 VM.
Due to CVE-2021-3177, Python needs to be upgraded to the latest version of Python 3.9, which is currently 3.9.9.
I did that using the deadsnakes PPA, following the steps below:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install python3.9
sudo apt-get update
sudo apt upgrade -y
The above ensures that Python 3.9.9 is now available, but now both python3.6 and python3.9 are present. So next we use the update-alternatives command to make python3.9 the default version.
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
Now that the alternatives are defined, we switch to option 2 (Python 3.9) as the default:
sudo update-alternatives --config python3
Once done, the following command will point to the latest version.
sudo python3 -V
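As an aside, the same switch can be made non-interactively with --set (not a step I ran; it assumes the 3.9 binary is at /usr/bin/python3.9):
sudo update-alternatives --set python3 /usr/bin/python3.9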
However, if you now run sudo apt update, you will see an error like this:
Traceback (most recent call last):
File "/usr/lib/cnf-update-db", line 8, in <module>
from CommandNotFound.db.creator import DbCreator
File "/usr/lib/python3/dist-packages/CommandNotFound/db/creator.py", line 11, in <module>
import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'
Reading package lists... Done
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null; fi'
E: Sub-process returned an error code
To fix this, we have to add a link using the following commands:
cd /usr/lib/python3/dist-packages/
sudo ln -s apt_pkg.cpython-{36m,39m}-x86_64-linux-gnu.so
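To verify the link, a quick sanity check (my addition, not part of the original steps) is to import the module under the default interpreter; the command should exit silently if apt_pkg is importable again:
python3 -c "import apt_pkg"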
The following is optional; I tried with and without these commands:
apt purge python3-apt
apt install python3-apt
sudo apt install python3.9-distutils python3.9-dev
Once done, the following command no longer results in any errors:
sudo apt update
This means that the issue is fixed.
But for some reason, I cannot connect to the machine afterwards, and if I create an AMI from it, I cannot connect to the launched instance using PuTTY or SCP either.
The same issue occurs with Ubuntu 20.x too.
Appreciate your help.
After upgrading Python, there are issues with the following Python modules that cloud-init depends on. This prevents EC2 from correctly configuring your newly booted instance via cloud-init, which is why it is inaccessible:
setuptools
urllib3
requests
jinja2
netifaces
You can debug this issue by going to your EC2 instance in the AWS Web Console and clicking:
Actions -> Monitor and troubleshoot -> Get system log
Sometimes it takes a while to update, so click the refresh button until your logs appear. It is easier to read the logs if you download them. This is what helped me solve the issues that I was having.
The following steps resolved the issue for me on Ubuntu 18.04 LTS:
For Ubuntu 20.04 LTS, change the 36m in the symbolic links to 38.
# Add deadsnakes ppa repository
sudo add-apt-repository ppa:deadsnakes/ppa
# Install new python version
sudo apt update
sudo apt install python3.10
# Fix broken apt_inst after python upgrade
sudo ln -s /usr/lib/python3/dist-packages/apt_inst.cpython-36m-x86_64-linux-gnu.so /usr/lib/python3/dist-packages/apt_inst.so
# Fix broken apt_pkg after python upgrade
sudo ln -s /usr/lib/python3/dist-packages/apt_pkg.cpython-36m-x86_64-linux-gnu.so /usr/lib/python3/dist-packages/apt_pkg.so
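# Optional sanity check (my addition, not in the original steps): apt's Python modules should import
python3 -c "import apt_pkg, apt_inst"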
# Keep the originally installed python version as an alternative with a higher priority of 2
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2
# Add the upgraded python version as an alternative with a lower priority of 1, so the system default stays on 3.6
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
# Reinstall python3-apt
sudo apt remove --purge python3-apt
sudo apt autoclean
sudo apt install python3-apt
# Install required packages
sudo apt install \
build-essential \
python3.10-distutils \
python3.10-venv \
libpython3.10-dev
# Install latest pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3.10 get-pip.py
# Upgrade outdated python libraries that break cloud-init
sudo -i
pip3 install --upgrade setuptools
pip3 install --upgrade urllib3
pip3 install --upgrade requests
pip3 install --upgrade jinja2
pip3 install --upgrade netifaces
pip3 install --upgrade --ignore-installed pyyaml
exit
# Upgrade cloud-init to latest version
sudo apt install --only-upgrade cloud-init
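As a final check (my addition; cloud-init provides a status subcommand), confirm cloud-init reports a healthy state before baking an AMI:
# Should end with "status: done" on a healthy instance
cloud-init status --long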
If you use Ansible, it is also affected by the upgrade.
Ansible can be fixed as follows:
Edit /usr/lib/python3/dist-packages/apt/package.py and change the following line:
from collections import Mapping, Sequence
to:
from collections.abc import Mapping, Sequence
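The same edit can be applied non-interactively with sed (my addition; -i.bak keeps a backup copy of the file):
sudo sed -i.bak 's/from collections import Mapping, Sequence/from collections.abc import Mapping, Sequence/' /usr/lib/python3/dist-packages/apt/package.py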
It would be useful if the deadsnakes repository could provide an updated python3-apt (e.g. python3.10-apt) to solve this issue.
Reference:
https://cloudbytes.dev/snippets/upgrade-python-to-latest-version-on-ubuntu-linux
I am aiming to install Ludwig for experimentation, but I couldn't find any solution to this issue on the internet.
(screenshot of the error message)
I am using Windows Subsystem for Linux (Debian).
Your python version is probably unsupported by tensorflow 1.15.3. I ran into the same issue trying to install with python 3.8.
https://github.com/tensorflow/tensorflow/issues/34302
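A quick way to confirm which interpreter and pip you are actually installing with (illustrative, not from the original answer):
python3 --version
pip3 --version   # also reports which Python this pip is tied to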
When I compiled Ludwig from GitHub source, there were a lot of dependencies to patch. I ran into the same error message and gave up solving it.
I retried with a clean installation from the very beginning and managed to install Ludwig successfully on a Google Cloud Debian 9 VM.
Here I redo the steps on an Oracle Cloud Ubuntu 20.04 VM.
Steps:
Ensure the following dependencies are ready; I consolidated them from various sources.
$ sudo apt update
$ sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev curl libbz2-dev lzma liblzma-dev python3-tk
Clean-install Python 3.6. I chose the older Python 3.6 to run Ludwig because Ludwig uses the old TensorFlow 1.15.3. I reserve the Python 3.7 and 3.8 environments for other, newer Python projects, for example TensorFlow 2.
# Simply use a temporary working folder.
$ cd /tmp
# Download the newest Python3.6 source.
$ curl -O https://www.python.org/ftp/python/3.6.12/Python-3.6.12.tgz
$ tar -xzvf Python-3.6.12.tgz
$ cd Python-3.6.12
# --prefix=/usr/local ensures the newly installed Python 3.6 does not interfere with the default Python executables in the OS. This is specifically warned about in the Google Cloud docs.
$ ./configure --prefix=/usr/local --enable-optimizations
$ sudo make altinstall
$ python3.6 --version
Python 3.6.12
# Upgrade pip and virtualenv
$ sudo python3.6 -m pip install --upgrade pip
$ sudo python3.6 -m pip install --upgrade virtualenv
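# Optional check (my addition): confirm the new interpreter has its own pip and virtualenv
$ python3.6 -m pip --version
$ python3.6 -m virtualenv --version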
Prepare the virtual environment for Ludwig.
Reference https://ludwig-ai.github.io/ludwig-docs/getting_started/#installation.
# Go back to home.
$ cd
# Create a Working directory.
$ mkdir Works
$ cd Works
# Initialize a virtual environment with Python3.6
$ virtualenv -p python3.6 ludwig
$ source ludwig/bin/activate
# Install Ludwig
$ pip install ludwig
You can see every dependency is taken care of and Ludwig is ready to use.
$ pip list
Package Version
-------------------- -------
... ...
ludwig 0.2.2.8
...
tensorflow 1.15.3
...
# Execute Ludwig
$ ludwig
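When you are done working, you can leave the virtual environment and come back to it later (standard virtualenv usage, nothing Ludwig-specific):
# Leave the environment
$ deactivate
# Re-enter it later from the Works directory
$ cd ~/Works
$ source ludwig/bin/activate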
I'm able to install it as the root user, but I want to install it in a clean environment. My use case is to test the installation of another application with pip for a customer who is using Python 3.7.0.
sudo apt-get update
sudo apt-get install build-essential libpq-dev libssl-dev openssl libffi-dev zlib1g-dev
sudo apt-get install python3-pip python3-dev
sudo apt-get install python3.7
Thanks.
(assuming python3.7 is installed)
Install virtualenv package:
pip3.7 install virtualenv
Create new environment:
python3.7 -m virtualenv MyEnv
Activate environment:
source MyEnv/bin/activate
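To confirm the environment uses the intended interpreter (an optional check, not part of the original steps):
python --version   # should report Python 3.7.x
which python       # should resolve inside MyEnv/bin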
To help anyone else who runs into the chicken-and-egg situation trying to use the chosen answer above, here's what solved it for me:
sudo apt install python3.7-venv
python3.7 -m venv env37
source env37/bin/activate
deactivate (when done using the environment)
I had installed Python 3.7 using deadsnakes rather than from source:
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.7
In doing so I could run python3.7 --version, but since I had no pip3.7 I could not install virtualenv as directed in the solution above. As luck would have it, deadsnakes has venv! Once I installed venv I could create my environment and be on my merry way.
Handy official python page with venv info
So why didn't I just use:
python3.7 -m ensurepip
That was giving me:
ERROR: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python3.7/dist-packages/easy_install.py'
Consider using the --user option or check the permissions.
Which left me with 3 choices:
use sudo (which is simple but I keep being told is frowned upon)
install with --user option which wasn't ideal in that I may not always be logged in as the same user
or install it in an environment which I'm told is the recommended route.
But see the chicken-and-egg problem above: how do I install pip in an environment when I can't create a venv or virtualenv? Hence my workaround of installing venv from deadsnakes, which allowed me to create the virtual environment and then install pip3.7:
(env37) user@ubuntu:~$ python3.7 -m ensurepip
(env37) user@ubuntu:~$ pip3.7 --version
pip 19.2.3 from /home/user/env37/lib/python3.7/site-packages/pip (python 3.7)
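With pip bootstrapped inside the environment, upgrading it is a sensible follow-up (my addition, using the env37 environment from above):
(env37) user@ubuntu:~$ python3.7 -m pip install --upgrade pip setuptools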
Some additional information: if you are trying a specific version like Python 3.7.10, executing pip3.7.10 install virtualenv might give the following error:
.pyenv/versions/3.7.10/bin/python: No module named virtualenv
So, in general, you can do the following steps.
[The commands are specific to macOS; I am currently using a Mac with the new M1 chip.]
After installing 3.7.10 using pyenv, make it global.
brew update
brew install pyenv
set environment variables
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(pyenv init -)"' >> ~/.bash_profile
source ~/.bash_profile
Look at the pyenv install list to see whether the version you want is available, then install it and make it global:
pyenv install --list
pyenv install 3.7.10
pyenv global 3.7.10
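Before creating the environment, you can confirm pyenv actually switched the interpreter (an optional check using standard pyenv/python commands):
pyenv version      # should list 3.7.10 as the active version
python --version   # should report Python 3.7.10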
create your virtual environment now with this version
python -m venv MyEnv
activate it
source MyEnv/bin/activate
Using pip on Windows, you can do the following:
1. virtualenv --python "C:\\Python37\\python.exe" venv  # use your own path
You will see something like this:
Running virtualenv with interpreter C:\Python37\python.exe
Using base prefix 'C:\Python37'
New python executable in C:\Users\XXXX\Documents\GitHub\MyProject\venv\Scripts\python.exe
Installing setuptools, pip, wheel...
done.
2. C:\Users\XXXXX\Documents\GitHub\MyProject>cd venv
C:\Users\XXXXX\Documents\GitHub\MyProject\venv>cd Scripts
C:\Users\XXXXX\Documents\GitHub\MyProject\venv\Scripts>activate
When you see the environment name, in this case (venv), at the beginning of the command prompt, your virtual environment is activated.
(venv) C:\Users\tuscar2001\Documents\GitHub\MyProject\venv\Scripts>
Please check the following link for more details: http://www.datasciencetopics.com/2020/03/how-to-set-up-virtual-environment-in.html
Figure out the python3.7 path on your system. For a Mac with python3.7 installed via brew, you can use the following:
virtualenv env -p /usr/local/opt/python@3.7/bin/python3
source ./env/bin/activate
I am trying to set up a skill using Shapely on Lambda. I got the error
module initialization error: Could not find lib geos_c or load any of its variants ['libgeos_c.so.1', 'libgeos_c.so'].
There's a similar question for Python 2.7. I can't use the lambda-packs by ryfeus because I'm on Python 3.6, but I figured the EC2 approach described by Graeme should work.
So I started up an EC2 instance using the public Amazon Linux AMI version from the AWS docs.
I then ran these commands:
$ sudo yum -y update
$ sudo yum -y install python36 python36-virtualenv python36-pip
$ mkdir ~/forlambda
$ cd ~/forlambda
$ virtualenv -p python3 venv
$ source venv/bin/activate
and then installed Shapely and a few other packages I needed.
$ sudo yum -y groupinstall "Development Tools"
$ pip install python-dateutil
$ pip install shapely
$ pip install pyproj
$ pip install pyshp
I then ran my skill (on the EC2 instance), and it works! So I copied the files in venv/lib/python3.6/site-packages plus myskill.py, zipped them up, uploaded to Lambda, and still get the geos_c error shown above :(
I have been able to upload a scaled-down version of my skill (minus Shapely, but including other packages that don't come with Lambda) and it works on Lambda, so I don't think it's an error in how I am zipping or uploading.
Am I missing something? Does it make a difference that the Development Tools were installed using "sudo yum install" instead of "pip install"?
For some reason, the pip install of Shapely and Pyproj didn't end up in the virtualenv site-packages. From a fresh EC2 instance, I ran these commands:
$ sudo yum -y update
$ sudo yum -y install python36 python36-virtualenv python36-pip
$ mkdir ~/forlambda
$ cd ~/forlambda
$ virtualenv -p python3 venv
$ source venv/bin/activate
(venv) $ sudo yum -y groupinstall "Development Tools"
(venv) $ pip install python-dateutil
(venv) $ pip install shapely -t ~/forlambda/venv/lib/python3.6/site-packages/
(venv) $ pip install pyproj -t ~/forlambda/venv/lib/python3.6/site-packages/
(venv) $ pip install pyshp
and then zipped up all the contents of site-packages/ plus myskill.py, uploaded to Lambda, and it worked.
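For reference, the zip-and-upload preparation can be scripted roughly like this (a sketch; lambda.zip is an illustrative name and the paths match the layout above):
$ cd ~/forlambda/venv/lib/python3.6/site-packages
$ zip -r9 ~/forlambda/lambda.zip .
$ cd ~/forlambda
$ zip -g lambda.zip myskill.py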
My MacBook didn't have virtualenv anywhere (because I had uninstalled it and rm'd everything).
I then installed virtualenv by
sudo pip install virtualenv
and I guess that installed virtualenv to /usr/local/bin because when I ran:
$ which virtualenv
/usr/local/bin/virtualenv
But when I tried to use virtualenv to create a new virtual environment, I got this:
$ virtualenv venv
-bash: /usr/local/share/python/virtualenv: No such file or directory
Why is it looking for virtualenv in /usr/local/share?
I see what's going on. virtualenv is installed:
/usr/local/bin/virtualenv
But it's being referenced here:
/usr/local/share/python/virtualenv
You could add a link:
cd /usr/local/share/python/ && ln -s /usr/local/bin/virtualenv
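If the stale path is only coming from bash's command cache rather than from an old wrapper script, clearing the cache is also worth trying (a guess at an alternative cause, not part of the original answer):
hash -r             # make bash forget remembered command locations
which virtualenv    # should now resolve to /usr/local/bin/virtualenv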