Python 3.x.x Proper Setup with Virtualenv and Multiple Installs to /opt/ - python-3.x

I have spent about a week trying to get Python 3.x.x set up “properly” on my system. It has been quite a battle and I'm just about there, with one final obstacle I can't seem to resolve. Many forums discuss setting up Python 3.x.x on various distros, and each has different methods, goals, outcomes, and errors/issues with no clear answer. By now I have put in over 100 hours and have broken/reinstalled my system from Clonezilla images dozens of times. But after all that, I have captured in this posting all the steps necessary to have the ultimate Python setup - minus the answer to the final obstacle, which I'm hoping someone can help me with:
The end goal I'm aiming for is the “ultimate python3.x.x setup” that I define as having the following characteristics:
has clean installations/configurations of Python 3.x.x that are built from source, including multiple side-by-side versions (e.g. python3.0.1, python3.2.5, python3.3.0) with preserved root permissions assigned to each folder as part of the default /opt/ directory
each installation does not interfere with the system's default interpreter, has pip, easy_install, distribute, and virtualenv all properly configured and working, and can run in isolation with different modules via virtualenvs
each python3.x.x is compiled, installed, and named clearly in the /opt/ directory (e.g. python3.3.0, python3.3.1, python3.3.2), and is configured such that calling any python3.x.x from a terminal window, or using that version, does not screw up the system's default interpreter or its dependencies/packages (plenty of forums on this one)
each python3.x.x works in PyCharm's stupidly simple and awesome virtualenv manager - my last hurdle
The following steps are my setup so far, compiled from the multiple forums necessary to accomplish all of the above minus the last hurdle. Two important points: 1) I'm running Linux Mint 13 LTS, and 2) I have NOT run “sudo apt-get install python3” or any similar apt-get of python3.x.x (this is deliberate, for reasons explained below).
These are the steps I have taken on a fresh install of Linux Mint 13; I now have a backed-up Clonezilla image as well as VirtualBox VMs, which I'm using to solve this last hurdle.
Step 1:
This mega-command will download and set up PyCharm, including the program's Oracle (Sun) Java dependencies, and install everything to the /opt/ directory (i.e. the proper location). I simply accept Oracle's prompts and complete PyCharm's final installation prompts (e.g. accept license, trial period, etc.).
Pycharm
sudo add-apt-repository ppa:webupd8team/java -y && sudo apt-get update && sudo apt-get install oracle-java7-set-default -y && sudo apt-get install oracle-java7-installer -y && wget "http://download.jetbrains.com/python/pycharm-professional-3.0.2.tar.gz" && sudo mkdir /opt/Pycharm && sudo cp pycharm-professional-3.0.2.tar.gz /opt/Pycharm/ && cd /opt/Pycharm/ && sudo tar xvfz pycharm-professional-3.0.2.tar.gz && cd pycharm-3.0.2/bin && sudo sh pycharm.sh
Step 2:
This single command will download, extract, move, compile, and install Python 3.3.0, with all necessary dependencies installed beforehand, and place python3.3.0 in the /opt/ directory (the proper location).
Python3.3.0
sudo apt-get install build-essential libbz2-dev bzip2 zlib1g-dev sqlite3 libsqlite3-dev -y && wget http://python.org/ftp/python/3.3.0/Python-3.3.0.tgz && tar xvfz Python-3.3.0.tgz && cd Python-3.3.0 && ./configure --prefix=/opt/python3.3.0 && make && sudo make install
Step 3:
This single command will download, extract, move, compile, and install 3.2.5, with all necessary dependencies, and place python3.2.5 in the /opt/ directory.
Python3.2.5
wget http://www.python.org/ftp/python/3.2.5/Python-3.2.5.tgz && tar xvfz Python-3.2.5.tgz && cd Python-3.2.5 && ./configure --prefix=/opt/python3.2.5 && make && sudo make install
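Each build can be sanity-checked by calling its interpreter via its absolute path (nothing here touches the system Python):
/opt/python3.3.0/bin/python3 --version
/opt/python3.2.5/bin/python3 --version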
We now have PyCharm and two side-by-side installations of Python 3.3.0 and Python 3.2.5 that are built from source, installed in the /opt/ directory, and will not interfere with the system's Python 2.x.x interpreter or its dependencies/packages. Good so far, as this is a very clean setup... Now comes the final hurdle.
If I (or you) run “sudo apt-get install python3-dev” from this point, along with a few other commands to set up and activate a Python 3.x.x virtualenv, everything appears to work. Meaning you can set up multiple Python 3.x.x virtualenvs and run them with PyCharm, Eclipse, or from a terminal window, as either virtualenvs or non-virtualenvs. PyCharm makes it stupidly easy to manage virtually any configuration you want with its built-in virtualenv manager. The problem, though, is that running “sudo apt-get install python3-dev” defeats the whole purpose of keeping the python3.x.x installations separate, and it runs the risk of 1) breaking python2.x.x packages, 2) installing pip packages meant for python3.x.x into python2.x.x directories, 3) limiting the user to python3.2 and lower, because you have to point whatever virtualenv you're using at the interpreter that came with “sudo apt-get install python3-dev”, and 4) a plethora of other problems scattered throughout the forums I have investigated this week in trying to figure this all out. Therefore “sudo apt-get install python3-dev”, or any other apt-get of python3.x.x, is not a solution, as it leads to too many issues.
At this point I have a master VirtualBox image set up with all the above steps completed, which I keep cloning and retrying to get the compiled interpreters in /opt/ to function without running “sudo apt-get install python3.x.x”. The 'key problem' indicated in the screenshot is this issue. Nothing I do seems to allow me to point a virtualenv at an /opt/python3.x.x/bin/pythonX interpreter, whether using an IDE like PyCharm or Eclipse, or from a terminal. As soon as I run “sudo apt-get install python3.x.x” it will work - but of course it inherits all the other nightmares that people scream about in forums when they go down that route. Any help is greatly appreciated...
screenshot http://www.pasteall.org/pic/show.php?id=65653
Every configuration I have tried fails to allow the interpreters that were compiled from source to function as virtualenvs, and thus to use package managers like pip, either in a terminal window or with PyCharm/Eclipse. I have tried installing to home directories, changing permissions in /opt/, making symlinks - practically everything that doesn't involve a “sudo apt-get install python3.x.x”... This post (https://askubuntu.com/questions/406756/how-to-install-python-3-x-x-properly#406762), at step two, works, but only if you revert to doing a “sudo apt-get install python3”.

What you're looking for is pyenv. It will manage your Python installations and let you install new versions of Python without hampering other installations, and it works fine with PyCharm. It's done completely in bash, so it does not have Python as a prerequisite.
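A minimal sketch of the pyenv workflow, assuming the pyenv-installer script at https://pyenv.run (the version numbers are just examples):
curl https://pyenv.run | bash
# Add pyenv to your shell startup file as the installer instructs, then:
pyenv install 3.3.0   # builds that version from source under ~/.pyenv/versions/
pyenv install 3.2.5
pyenv versions        # list all interpreters pyenv manages
pyenv local 3.3.0     # pin the current directory to 3.3.0
In PyCharm, you would then point the project interpreter at ~/.pyenv/versions/3.3.0/bin/python.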

I have finally figured out what I was doing wrong. I was not reading the make report and fixing the missing dependencies before installing. The main problem was the missing dependencies for _ssl, which is required for pip to work with Python 3.
I now have my "ultimate python setup"
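As a hedged sketch of that fix, using the Python 3.3.0 build from step 2 (libssl-dev is the Debian/Mint package providing the OpenSSL headers that _ssl needs):
sudo apt-get install libssl-dev
cd Python-3.3.0
./configure --prefix=/opt/python3.3.0
make
# Near the end of make's output, check the report of skipped modules, e.g.:
#   Python build finished, but the necessary bits to build these modules were not found:
#   _ssl ...
# Only run the install once _ssl is no longer listed:
sudo make install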

Related

python3: can't install psycopg2 [duplicate]

I cannot figure this out for the life of me.
When I pip install django-tenant-schemas it tries to install the dependency psycopg2 which requires the Python headers and gcc. I have all this installed and still keep getting this error!
./psycopg/psycopg.h:35:10: fatal error: libpq-fe.h: No such file or directory
So to install libpq-fe.h I need to sudo apt-get install libpq-dev..
..which returns..
libpq-dev is already the newest version (10.10-0ubuntu0.18.04.1).
Then when I sudo find / -name libpq-fe.h, the file doesn't seem to be anywhere in my OS.
I am lost at this point. If anyone can help I would highly appreciate it.
For some reason, the file is missing on the system.
As you're using apt-get, the system is dpkg-based, presumably Debian or one of its derivatives. You can try Ubuntu's package search to find which package contains a file with a name ending in libpq-fe.h.
I found that the package is libpq-dev and the file's absolute path is /usr/include/postgresql/libpq-fe.h.
FWIW, on a dpkg based system, you can check which package gives a file if you know the file's absolute path:
% dpkg -S /usr/include/postgresql/libpq-fe.h
libpq-dev: /usr/include/postgresql/libpq-fe.h
Also, unlike find, locate keeps a cache of found files (mlocate.db) that is refreshed every day via cron; so if the file happened to be removed after the last run, you can still run locate libpq-fe.h to get the absolute path where the file used to be, without needing to check the Ubuntu package search online.
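For instance, mirroring the dpkg -S example above:
% locate libpq-fe.h
/usr/include/postgresql/libpq-fe.h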
So the package is libpq-dev. Now, reinstalling it will restore everything to the default state, i.e. all relevant files will be copied to the right places. As it is only a library package, no user/system level configurations will be overridden (and dpkg would prompt you for action for any package that does that).
To reinstall the package:
sudo apt-get install --reinstall libpq-dev
In my case, I realized it was trying to use the deprecated setup.py install, so I installed wheel (pip install wheel) and that sorted it all out.
Well, even after installing these libraries:
sudo dnf install python-virtualenv openssl-devel gcc libffi-devel libxslt-devel
the issue was not gone.
I used mlocate to find where the libpq-fe.h file is located. On my system (Fedora 32) it was at /usr/pgsql-10/include/libpq-fe.h:
yum install mlocate
sudo updatedb
locate libpq-fe.h
After all that, I added this line to ~/.bash_profile:
nano ~/.bash_profile
export PATH=/usr/pgsql-10/bin/:$PATH
Works fine, I can easily install psycopg2 without any trouble.
You need to set LD_LIBRARY_PATH so that it includes the path of your library, /usr/pgsql-11/lib.
Source: The 3rd point of build prerequisites at https://www.psycopg.org/docs/install.html#build-prerequisites
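A hedged sketch of what that looks like in a shell startup file (the pgsql-11 paths are examples; adjust them to your PostgreSQL version):
# Make pg_config findable and let the linker resolve libpq at runtime
export PATH=/usr/pgsql-11/bin:$PATH
export LD_LIBRARY_PATH=/usr/pgsql-11/lib:$LD_LIBRARY_PATH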

Docker doesn't download recommended packages

I am using docker for a Python application.
FROM python:3.5-slim
WORKDIR /abc
ADD . /abc
RUN apt-get update && \
apt-get install -y --no-install-recommends \
curl \
gcc \
python3-dev \
musl-dev \
&& \
pip install -r requirements.txt &&\
apt-get clean && \
rm -rf /var/lib/apt/lists/* &&\
apt-get purge -y --auto-remove gcc
So whenever I run the docker build command, it first runs the apt-get update command.
Along with the update command, it's also downloading many recommended packages, making for a long build time.
How can I stop Ubuntu from installing recommended packages and make the Docker build faster?
Note: In the Dockerfile, apt-get --no-install-recommends update is not working; it's still downloading packages.
apt-get update should not install anything. The only thing apt-get update does is update the local description of what packages are available. It does not download those packages themselves - it just downloads the updated descriptions. That can take a while.
apt-get install will of course install packages. In order to install those packages, it needs to download them. Using --no-install-recommends tells apt-get to not install "recommended packages". For example, if you install vim, there are many plugins that are also recommended and provided as separate packages. With that switch, those vim plugins will not be installed. Of course, installing the packages you selected can also take a while.
What you're doing by chaining everything with && \ is putting all of it into a single Docker RUN command, and therefore a single layer. So every time that layer is rebuilt, all of it has to run again - and it changes often, because the list of available packages changes every day, sometimes even multiple times per day.
Try moving pip install -r requirements.txt to its own RUN command after the apt-get stuff. If that does what you want, then I suggest reading and learning more about how Docker works under the hood. In particular, it's important to understand how each command adds a new layer, and how any dynamic information in a single layer can cause long build times because that layer will frequently change.
Additionally, you might want to move ADD . /abc to after the RUN commands (see the sketch below). Any change you make to the files being added (source code, I assume) invalidates that layer, and with it every layer after it - including the one representing the executed apt-get command. Since that layer has been invalidated, it needs to be rebuilt. If you're actively developing the project, that can easily cause apt-get to be executed every time you build your image.
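A hedged sketch of the reordering suggested above (same packages as the original; whether you can still purge gcc afterwards depends on what your requirements need at runtime):
FROM python:3.5-slim
WORKDIR /abc
# Package lists and compilers change rarely, so keep them in an early layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl gcc python3-dev musl-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# Copy only the requirements file first: this layer is reused unless requirements.txt changes
ADD requirements.txt /abc/
RUN pip install -r requirements.txt
# Source code changes now invalidate only this final layer
ADD . /abc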
There are plenty of resources you can search for which discuss how to optimize your time when using Docker. I won't recommend any specific one and will leave it to you for learning.

Ubuntu Linux - Install Packages from Local Repository

We (my workplace) have an embedded product based around an NVIDIA TK1 which is running a custom build of Ubuntu for ARM. As part of the setup routine for our product, we have a custom script which downloads a large number of packages and archives from the web, extracts and installs them.
Ideally what we are looking to do is to pre-download these packages and archives into a "local" repository so that our product uses known versions which work with our application. Auto-updating of packages is disabled and the end product will rarely have access to the internet anyway, we just want to ensure that the versions of packages used remains the same for EVERY product shipped.
As an example, here are parts of the update script:
sudo apt-get install -y linux-firmware
sudo apt-get install -y '.*libxcb.*' libxrender-dev libxi-dev libfontconfig1-dev libudev-dev libx11-dev libx11-xcb-dev libxext-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
sudo apt-get install -y gcc-4.9 g++-4.9
sudo apt-get install -y ros-indigo-desktop
# This one is shortened as it's a long URL
wget cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
sudo dpkg -i cuda-repo-l4t-r21.1-6-5-prod_6.5-14_armhf.deb
Obviously, a large number of the packages will have a lot of dependencies, so if there is a way of downloading ALL required packages and ALL required dependencies into a local package repository, and changing my scripts so they install from there, that would be ideal.
I'm unsure which would be the best way to approach this.
I have had suggestions of installing all packages onto a base product, then "cloning" the file system and pushing it onto other modules; however, I don't really know the pros/cons of doing this.
UPDATE
OK, so I've since found a large number of packages in the /var/cache/apt/archives/ folder, which seem to all relate to what is installed by our script.
Is it feasible/safe to install ALL of these packages using sudo dpkg -i *.deb?
A good approach would be to use some kind of debootstrap to build a custom image and then flash it onto the devices.
To cache apt-get downloads there is apt-cacher. I haven't tried it, but its cache can be frozen.
For local repositories - reprepro.
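As a hedged sketch of the local-repository idea, building on the .deb files already sitting in /var/cache/apt/archives/ (the /srv/local-repo path is just an example): dpkg-scanpackages, from the dpkg-dev package, turns a directory of .deb files into a flat repository that apt-get can resolve dependencies against:
sudo apt-get install dpkg-dev
mkdir -p /srv/local-repo
cp /var/cache/apt/archives/*.deb /srv/local-repo/
cd /srv/local-repo
# Generate the package index apt needs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# Register the directory as a local, unsigned package source
echo "deb [trusted=yes] file:/srv/local-repo ./" | sudo tee /etc/apt/sources.list.d/local.list
sudo apt-get update
After that, the install script's apt-get install lines can run unchanged without internet access, pulling the pinned versions from the local directory.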

How do I keep a dynamic URL updated?

I am writing a script that will allow me to automate the customization of a fresh Linux installation. The script generally runs
sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
for all programs that are in the apt repository, but I'm here to ask for help about programs that aren't found in a package manager.
For example, let's say I want to install "IDEA".
Manually I'd
Go to the Jetbrains website
Download the appropriate .tar.gz file
Install said .tar.gz
My question stems from the second step "Download the appropriate .tar.gz file". If I were to download this manually now, I'd use wget against the URL "https://download-cf.jetbrains.com/idea/ideaIU-2016.1.3.tar.gz".
As you can see, the URL changes with each version. My question is: how would you write a bash script to always download the newest version of IDEA?
I am also looking into programs like Vagrant and the like, but I'm still curious to see if there is a solution to this problem.
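One hedged approach, assuming JetBrains' release-metadata endpoint at data.services.jetbrains.com (IIU is the product code for IntelliJ IDEA Ultimate; the exact JSON shape, and therefore the jq path, is an assumption to verify, and jq must be installed):
# Ask JetBrains for the latest release and extract the Linux download link
url=$(curl -s 'https://data.services.jetbrains.com/products/releases?code=IIU&latest=true&type=release' \
    | jq -r '.IIU[0].downloads.linux.link')
wget "$url"
The general pattern - query the vendor's release API instead of hard-coding a versioned URL - is usually the most robust answer to this problem.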

how to update make 3.81 linux

I am new to Linux (new as in I installed it yesterday). I need it for my programming course at university, and I've been told to install specific versions of specific programs. But though I've used apt-get install to install them (having previously run apt-get update), they aren't the correct versions.
The programs that I need are make 4.0 and valgrind 3.10.1.
apt-get installs make 3.81 and valgrind 3.10.0.SVN.
I have tried typing "apt-get install make4.0" and "apt-get install valgrind3.10.1" to no avail. I have downloaded them from the internet and followed what instructions I could understand to install the newer versions, but it keeps saying that I have the older ones. (I'm not sure if I can post direct links here; if I can, let me know and I'll post where I got them from.)
What have I been doing wrong? How can I fix this?
I am currently running Linux Mint.
Thanks for any answer in advance.
Due to a long-standing unresolved Debian bug report, GNU Make remained at the age-old 3.81 in Debian for a very long time, and as a consequence, in Debian-based distributions such as Ubuntu and Mint.
The latest Debian release, Jessie, has upgraded to 4.0, so Debian-based distributions will get that upgrade. However, it is better to use 4.1.
This has been discussed many times on the GNU Make mailing list and elsewhere.
So to get a newer version, you must compile it from scratch.
This is easy:
Install the required packages (gcc, make and such).
Open up a shell (if you're using the GUI, a terminal window).
Type the following commands (or something equivalent, e.g. you can use curl instead of wget):
cd /tmp
wget http://ftp.gnu.org/gnu/make/make-4.1.tar.gz
tar xvf make-4.1.tar.gz
cd make-4.1/
./configure
make
sudo make install
cd ..
rm -rf make-4.1.tar.gz make-4.1
Now, make 4.1 is in /usr/local/bin/make.
You can verify it is there with whereis make.
You can make it your default make by prefixing /usr/local/bin to your $PATH variable in your shell startup file; for instance, in .profile or .bashrc if you use the bash shell.
Don't try to install a self-compiled make (or anything else that doesn't come from the distribution's package manager) into /bin or /usr/bin; doing that will confuse your package manager.
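For example, the PATH change from above, appended to ~/.bashrc:
# Prefer the self-compiled make in /usr/local/bin over the distribution's /usr/bin/make
export PATH=/usr/local/bin:$PATH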
