Why does my Cygwin setup tell me Python3.6 not installed? - python-3.x

Python 3.6 has been installed (using the Cygwin setup .exe, update, etc.). The executable is located in /bin/ ... or is it in /usr/bin/? Cygwin's ls shows that /usr/bin exists, but on the Windows side this directory doesn't exist. The contents of the two directories are also identical, even when I rename a file in one of them, yet I haven't found a symlink (in /usr or in /) that would explain this!
I'm struggling to get virtualenvwrapper installed (part of the preparation required to follow along with a book, TDD With Python).
I just overcame a first hurdle (eventually) by realising I had to install virtualenvwrapper using pip3, not pip! ... I feel like I'm in at the deep end.
So I did:
pip3 install virtualenvwrapper
echo "source virtualenvwrapper.sh" >> ~/.bashrc
source ~/.bashrc
... then I did
mkvirtualenv --python=`py -3.6 -c"import sys; print(sys.executable)"` superlists
(NB "python3" is the correct name of the symlink which points to the Python3 executable in /bin/; there is a "python" symlink but that points to Python2.7)
And I got:
Requested Python version (3.6) not installed
Using base prefix '/usr'
New python executable in /home/Chris/.virtualenvs/superlists/bin/python3
Also creating executable in /home/Chris/.virtualenvs/superlists/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating /home/Chris/.virtualenvs/superlists/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/Chris/.virtualenvs/superlists/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/Chris/.virtualenvs/superlists/bin/preactivate
virtualenvwrapper.user_scripts creating /home/Chris/.virtualenvs/superlists/bin/postactivate
virtualenvwrapper.user_scripts creating /home/Chris/.virtualenvs/superlists/bin/get_env_details
(superlists)
Anyone know what's going on? How do I get the system to recognise that Python 3.6 is actually installed?
Later: Or am I being very dense? Maybe making a virtual environment with this module always involves installing a new Python executable?
Later still: I'm still not clear about this, but it isn't stopping me from using virtualenv and virtualenvwrapper and generally getting on with the book. Despite complaining that Python 3.6 doesn't exist, the setup appears (as far as I can tell!) to be using symlinks under the directories in ~/.virtualenvs/ that point to one of the Python symlinks in /bin/ ...

About the first question:
/usr/bin and /usr/lib are by default also automatic mount points
generated by the Cygwin DLL similar to the way the root directory is
evaluated. /usr/bin points to the directory the Cygwin DLL is
installed in, /usr/lib is supposed to point to the /lib directory.
https://cygwin.com/cygwin-ug-net/using.html#mount-table
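You can see these automatic mount points from inside Cygwin with the mount command. The output will look roughly like this (illustrative; the Windows paths depend on where Cygwin is installed):
$ mount
C:/cygwin64/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin64/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin64 on / type ntfs (binary,auto)
This is why /bin and /usr/bin show identical contents with no symlink in sight: they are the same directory mounted twice.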
For the second question, to check whether python3 is installed:
$ cygcheck -c python3
And, as mentioned by phd, the py command is not a Cygwin one but the Windows Python launcher, so you are probably mixing the Windows and Cygwin Python installations.
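If the goal is a Cygwin-side virtualenv, one hedged fix is to hand mkvirtualenv the Cygwin interpreter directly instead of asking the Windows launcher for a path:
mkvirtualenv --python=$(which python3) superlists
Here which python3 resolves to Cygwin's /usr/bin/python3 symlink, a path Cygwin's virtualenv understands, whereas py -3.6 prints a Windows path belonging to a separate Windows installation.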

Related

All Python scripts stopped working: path changed in all projects after macOS Monterey 12.6 update

After updating macOS Monterey to 12.6 today, all my Python projects/scripts have stopped working.
Checking the symlink of a Python binary in one of my project's venv, I see the original to be:
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/bin/python3.9
which is not in the PATH list!? 🤔
Installed with the macOS update today?
It seems that all the symlinks to Python in the virtual environments of all of my VS Code projects now point to the binary above, and none of the projects work anymore, all failing with ModuleNotFoundError.
How can I get all my projects to work again?
Background/details:
.bash_profile:
# Setting PATH for Python 3.8
PATH="/Library/Python/3.8/bin:${PATH}"
export PATH
.zprofile:
PATH="/Library/Python3.9/bin:${PATH}"
export PATH
eval "$(/opt/homebrew/bin/brew shellenv)"
.zshrc:
export PATH=$PATH:/Users/xxxx/Library/Python/3.8/bin
The scripts run with the venv activated ((venv)...), confirmed with which python, which correctly returns the path of the venv's Python (though it is symlinked to the weird path above).
When I try to (re)install those libraries with the venv still active, they get installed into the root Python instead, with the following warning:
WARNING: The scripts f2py, f2py3 and f2py3.9 are installed in '/Users/xxxxxx/Library/Python/3.9/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
I'm using /bin/zsh when run in VS Code. Same from stand-alone terminal window.
I shared the Bash profile above as well, since it is listed in the paths.
I tried:
echo $PATH
which returns (edited for readability):
/opt/homebrew/bin:
/opt/homebrew/sbin:
/Library/Python3.9/bin:
/usr/local/bin:
/usr/bin:
/bin:
/usr/sbin:
/sbin:
/usr/local/MacGPG2/bin:
/Library/Apple/usr/bin:
/Users/xxxx/Library/Python/3.8/bin
notes:
the last path (/Users/..) seems to be the one used by Bash but not by Zsh/VS Code. This was working as-is before the update, though.
need to add missing path: /Users/xxxx/Library/Python/3.9/bin
checking paths one by one:
/opt/homebrew/bin: no Python binary in folder
/opt/homebrew/sbin: no Python binary in folder
/Library/Python3.9/bin: folder Python3.9 does not exist
/usr/local/bin: /usr/local/bin/python found +++
/usr/bin: /usr/bin/python3 found +++
/bin: no Python binary in folder
/usr/sbin: no Python binary in folder
/sbin: no Python binary in folder
/usr/local/MacGPG2/bin: no Python binary in folder
/Library/Apple/usr/bin: no Python binary in folder
/Users/xxxx/Library/Python/3.8/bin: no Python binary in folder 🤔
not returned by echo $PATH / needs to be added: /Users/xxxx/Library/Python/3.9/bin: also no Python binary in folder 🤔
not listed but symlinked now in all projects: /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/bin/python3.9
I had the same issue in VS Code running notebooks. I uninstalled and reinstalled the Jupyter extension. Then, when attempting to run a notebook for the first time again, there is a prompt to install Jupyter. Once that was done, all the notebooks ran fine again for all the environments I am using.
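More generally, since a venv only records symlinks back to its base interpreter, one hedged recovery path (assuming the project's dependencies are pinned in a requirements.txt) is to recreate the environment on top of whichever interpreter you actually want:
python3 -m venv --clear venv
source venv/bin/activate
pip install -r requirements.txt
The --clear flag wipes the old, wrongly symlinked environment before rebuilding it in place.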

Virtualenv python not recognizing .pth for custom site-packages

I installed a new virtual env with python3.6. After checking its site packages (python3.6 -m site --user-site) I was pointed to /Users/username/.local/lib/python3.6/site-packages.
I added a file named custom.pth with the contents:
/Users/username/Python Files/Packages
but for some reason it still fails to recognize our in-house packages. However, I have the exact same .pth file for the system's python3.6 and it works without a hitch. Is there something else I can try?
It finally worked when I placed the .pth file into the <venv folder>/lib/python3.6/site-packages/ directory. Perhaps it's an issue with the activated venv overriding the Python's site-packages.
Python virtual environments are intended to isolate packages from both global and user site-packages. User site-packages are not available to Python in a virtual environment. Compare python -c "import sys; print(sys.path)" before and after activating a venv. Hence their .pth files are not processed.
Outside of a venv, .pth files from the user site-packages are processed.
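A quick way to see the difference (output abridged and illustrative; your paths will vary):
$ python3 -c "import sys; print(sys.path)"
[..., '/Users/username/.local/lib/python3.6/site-packages', ...]
$ source venv/bin/activate
(venv) $ python -c "import sys; print(sys.path)"
[..., '<venv folder>/lib/python3.6/site-packages']
The user site-packages entry, and with it any .pth file it contains, disappears inside the venv.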

Pip freeze doesn't show freshly installed packages with PyCharm

I use PyCharm to create and manage the virtualenvs in my projects.
The problem is that after adding a library with PyCharm, when I type the command pip3 freeze --user, the library does not appear in the command's output.
I have to type the pip install command manually each time so that the library becomes visible.
What should I do in PyCharm to solve this problem?
From what you are saying, the first thing that comes to mind is that you should use:
pip freeze
And not
pip3 freeze
Because when you have the virtualenv activated, the command mapped to its pip version is the former. Note also that for installing you seem to use pip, not pip3.
Moreover, as far as I know, the --user option relates to packages installed in the user folder:
--user Install to the Python user install directory for your platform. Typically ~/.local/, or %APPDATA%\Python on Windows. (See the Python documentation for site.USER_BASE for full details.)
If your packages are installed in the virtualenv folder, I would advise against using that option.
Also, please make sure you have your virtualenv activated. On Linux you can do so with source path/to/virtualenv/bin/activate.
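A quick sanity check that the venv is really the one being used (illustrative output; your paths and versions will differ):
(venv) $ which pip
path/to/virtualenv/bin/pip
(venv) $ pip -V
pip 21.3.1 from path/to/virtualenv/lib/python3.8/site-packages/pip (python 3.8)
If pip -V reports a path outside the virtualenv, your installs and your freeze are talking to different Pythons.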
Edit
I understand that the reason you are using pip3 is that you may have different versions of Python on your machine. Let me explain a bit further how this works, because version management is usually a headache for many programmers and a common source of problems.
If you install different versions of Python on your Linux machine as root, the installation applies to the whole system. The Python 2 binary on Linux machines is usually /usr/bin/python. However, I am uncertain which directory is used for Python 3 installations. You can check that easily with whereis python3; you can search for the path to the binary of any command with whereis command. Note that this also works for whereis python, as long as you don't have a virtualenv activated.
Additionally, the link to the binary of a command (or, more broadly, the set of instructions to be executed) is defined in certain folders in Linux, depending on whether you created the command as root or as a user, and possibly also on the distro. This works differently on Windows, which handles command mappings through the Registry. When you enable your virtualenv, you are creating an environment that remaps commands such as python to the Python installation in your virtualenv folder.
When you disable the virtualenv, the command points again to the default installation path. The same happens with pip, so incorrect usage of this tool may result in packages being installed in different locations and therefore not appearing available to the right Python version at any given moment.
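You can watch this remapping happen (paths illustrative):
$ which python
/usr/bin/python
$ source path/to/virtualenv/bin/activate
(virtualenv) $ which python
path/to/virtualenv/bin/python
(virtualenv) $ deactivate
$ which python
/usr/bin/python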
In Linux, environment variables are shell-dependent; you can print them with echo $variable and set them with variable=value (in Bash). The search path is simply called PATH, and you can get yours by typing echo $PATH.
Source: https://askubuntu.com/a/262073/426469
I encourage you to check other questions in SE network such as this: https://unix.stackexchange.com/a/42211/96121, to learn more about this.
Addendum
Quick tip: it is common to use the pip freeze command as follows:
pip freeze > requirements.txt
It is a convention indicating that the modules in that file are required for your application to function correctly. It also lets you exclude the virtualenv folder when you install the program on another computer, since the requirements for a fresh installation can be read straight from the file. That said, you can use the command however you want.
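On the other computer, a typical counterpart (sketched; the file and folder names are up to you) is:
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt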

Python 3.6 (ubuntu 16.04) venv installing symlink not binary in environment's /bin

In Ubuntu 16.04, when I do python3.6 -m venv myenvironment, venv does not install the python 3.6 binaries in ../myenvironment/bin but only symlinks to the system binaries. The official docs for 3.6 suggests otherwise:
28.3.1 . . . (venv) also creates a bin subdirectory containing a copy of the python binary.
Unless by "copy" they mean a symlink, venv is not doing what the doc says. The pip binaries are there, but no python 3.6 binary (or that of any other version).
Maybe it doesn't matter, since within that environment 3.6 will be the version used. But then which modules/packages will it use? venv did create the lib/python3.6/site-packages subdirectory as expected. Should I assume that, unless I put a different version of a module/package there, the system-wide library will be used within this virtual environment?
Edit: In partial answer to my own question, it seems the default on Ubuntu 16.04 is for venv to not install the python binaries. Adding the --copies option forces venv to place the binaries into the env's ../bin directory. venv creates a subdirectory for the env's site-packages as expected. It does not, however, put the standard library modules/packages into the env's ../lib/python3.6 sub-directory.
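For example (a hedged check of the difference):
python3.6 -m venv --copies myenvironment
ls -l myenvironment/bin/python3.6
With --copies, the ls shows a regular file; without it, a symlink to the system binary.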
According to the output of print(sys.path) (when run from within the env), the env's python copy still looks in the system's /usr/lib/python3.6 directory for the standard library. I guess that's okay if that's how it should work, but does one ever want to use a particular version of a module or package in the standard library, or is that just not ever done and only none-standard-library modules/packages are placed in the env's site-packages subdirectory?
2nd Edit: Here's an SO Q&A on the venv's behavior regarding standard library modules and packages:
Where is the standard library in python virtual environment?
The gist of the answers is that venv does not create a copy of the standard library in the env's directories. The system-wide standard library is always used. At least one of the participants in that Q&A commented it was odd that venv behaves that way. Seems odd to me as well. Does anyone know why?

unable to execute 'x86_64-conda_cos6-linux-gnu-gcc': No such file or directory (pysam installation)

I am trying to install pysam.
After excecuting:
python path/to/pysam-master/setup.py build
This error is produced:
unable to execute 'x86_64-conda_cos6-linux-gnu-gcc': No such file or directory
error: command 'x86_64-conda_cos6-linux-gnu-gcc' failed with exit status 1
There are similar threads, but they all seem to address the problem assuming administrator rights, which I do not have. Is there a way around this to install the needed files?
DISCLAIMER: This question derives from a previous post of mine:
manually installing pysam error: "ImportError: No module named version"
But since it might require a different approach, I made it a question of its own.
You can also get the same error while installing some R packages if R was installed using conda (as in my case).
Then just install the compiler package by executing conda install gxx_linux-64 to make that command available.
Source:
https://github.com/RcppCore/Rcpp/issues/770#issuecomment-346716808
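If the install worked, the prefixed compiler should afterwards resolve on PATH (a quick, hedged check; the relevant conda environment must be active):
which x86_64-conda_cos6-linux-gnu-gcc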
It looks like Anaconda had a new release (4.3.27) that sets the C compiler path to a non-existent executable (quite an embarrassing bug; I'm sure they'll fix it soon). I had a similar issue when pip installing with the latest Miniconda, which I fixed by using version 4.3.21 and making sure I was not doing something like conda update conda.
See https://repo.continuum.io/miniconda/ which has release dates and versions.
It should now be safe to update conda. This is fixed in the following python packages for linux-64:
python-3.6.2-h0b30769_14.tar.bz2
python-2.7.14-h931c8b0_15.tar.bz2
python-2.7.13-hac47a24_15.tar.bz2
python-3.5.4-hc053d89_14.tar.bz2
The issue was as Jon Riehl described - we (Anaconda, formerly Continuum) build all of our packages with a new GCC package that we created using crosstool-ng. This package does not have gcc; it has a prefixed gcc - the missing command you're seeing, x86_64-conda_cos6-linux-gnu-gcc. This gets baked into Python, and any extension built with that Python goes looking for that compiler.
We have fixed the issue using the _PYTHON_SYSCONFIGDATA_NAME variable that was added to Python 3.6, and we have backported that to Python 2.7 and 3.5. You'll now only ever see Python using the default compilers (gcc), and you must set _PYTHON_SYSCONFIGDATA_NAME to the appropriate filename to have the new compilers used.
Setting this variable is something that we'll put into the activate scripts for the compiler package, so you'll never need to worry about it. It may take us a day or two to get new compiler packages out, though, so post issues on the conda-build issue tracker if you'd like to use the new compilers and need help getting started.
Relevant code changes are at:
py27: https://github.com/anacondarecipes/python-feedstock/tree/master-2.7.14
py35: https://github.com/anacondarecipes/python-feedstock/tree/master-3.5
py36: https://github.com/anacondarecipes/python-feedstock
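As a hedged illustration of that variable (the module name below is only illustrative; check your environment's lib/pythonX.Y directory for the _sysconfigdata*.py files actually present):
ls $CONDA_PREFIX/lib/python3.6/_sysconfigdata*.py
export _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_x86_64_conda_cos6_linux_gnu
Left unset, Python falls back to the default sysconfigdata module and therefore to the plain gcc toolchain.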
The solution that worked for me was to use conda to install the R packages:
conda install -c r r-tidyverse
or r-ggplot2, r-readr.
Also ensure that the installation is not failing because of admin privileges.
It will save you a great deal of pain.
After upgrading Golang to 1.19.1, I started to get:
# runtime/cgo
cgo: C compiler "x86_64-conda-linux-gnu-cc" not found: exec: "x86_64-conda-linux-gnu-cc": executable file not found in $PATH
Installing gcc_linux-64 from the same channel resolved it:
conda install -c anaconda gcc_linux-64
Somewhere in your $PATH (e.g., ~/bin), do
ln -sf $(which gcc) x86_64-conda_cos6-linux-gnu-gcc
Don't put this in a system directory or conda's bin directory, and remember to remove the link when the problem is resolved upstream. gcc --version should be version 6.
EDIT: I understand the sentiment in the comments against manipulating system paths, but maybe we can use a little critical thinking for the actual case at hand before reciting doctrine. What have we actually done with the command above? Nothing more than putting an executable (a symlink) called x86_64-conda_cos6-linux-gnu-gcc in one's personal ~/bin directory.
If putting something in one's personal ~/bin directory broke a future conda (after it fixes the C compiler path to point to the gcc it embeds), that would be a bug in conda. Would the existence of this verbosely named compiler mess with anything else? That's unlikely, too. And even if something did pick it up, it's just your system gcc after all...
