How to have dedicated Jupyter notebook configuration files on one machine - node.js

I am running a Windows 10 machine with a Python installation that came bundled with one of the programs I work with. This program depends on specific versions of Python packages, including Jupyter and JupyterLab, so I cannot update/upgrade them without breaking the program's functionality.
Hence, I decided to install a more recent version of Python in addition to the one already on my machine. That went fine, and so did installing all the packages I was after.
However, even though I installed Node.js and npm alongside the new version of Python, JupyterLab still does not recognize them when I attempt to install a widget.
In addition, when running jupyter-lab.exe --generate-config, I am asked whether I want to overwrite the existing configuration file.
I have no intention of doing so; instead, I would like to configure the different Jupyter notebook environments separately from each other.
Is there a way to do this?
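One possible approach (a sketch, not a confirmed fix for this exact setup): Jupyter honors the JUPYTER_CONFIG_DIR and JUPYTER_DATA_DIR environment variables, so each installation can be pointed at its own directories before generating a config. The C:\jupyter-new paths below are hypothetical examples:

```shell
:: Point the second Jupyter installation at its own config/data
:: directories (C:\jupyter-new is a hypothetical path - adjust to taste)
set "JUPYTER_CONFIG_DIR=C:\jupyter-new\config"
set "JUPYTER_DATA_DIR=C:\jupyter-new\data"

:: --generate-config now writes to the new location instead of
:: asking to overwrite the existing file
jupyter-lab.exe --generate-config
```

Setting these variables only in the shell (or in a small .bat wrapper) that launches the newer JupyterLab leaves the original program's configuration untouched.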

Related

jupyter labs stopped importing packages from virtual environment

I'm using JupyterLab with a virtual environment. It previously recognized the environment and correctly installed the packages. A few days ago it stopped importing some of the packages, although neither the virtual environment nor the kernel had changed.
The packages are pandas and transformers, and I can confirm that they're installed by running conda list after the environment is activated. I'm getting a "module not found" error in Jupyter.
I've tried the following but I still get the error:
Stopped the session from the command line and restarted Jupyter
Tried a different virtual environment
Tried switching to a kernel based on the new environment
Read the troubleshooting documentation from JupyterLab, but there are no new files.
However, the packages will install when I run commands directly from the terminal instead of from Jupyter. I used conda to install all the packages, and when I run the jupyter --paths command, everything appears to be in order.
I'm not sure what the error is that's causing this behavior, or how to fix it. I'm using a Mac. Thanks in advance.
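A common cause of this symptom (an assumption, since the asker's kernel list isn't shown) is that the notebook's kernel points at a different Python than the activated conda environment. Registering the environment explicitly as a named kernel is one sketch of a fix; myenv is a hypothetical environment name:

```shell
# Activate the environment whose packages should be visible
conda activate myenv

# Register that environment's Python as a named Jupyter kernel
# (requires ipykernel in the environment: conda install ipykernel)
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

# Verify the kernel is registered, then select it inside JupyterLab
jupyter kernelspec list
```

Inside a notebook, `import sys; print(sys.executable)` shows which Python the kernel is actually running; if it isn't the environment's interpreter, imports will fail even though conda list shows the packages.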

Python was previously installed on D:; after a soft reset I installed it on C:. Some programs are still pulling the old Python file path. How can I change it?

Fatal error in launcher: Unable to create process using '"D:\python.exe" "D:\Scripts\pip.exe" -V': The system cannot find the file specified.
This was the result of running
pip -V
both inside and outside of a Python environment. I was trying to install pipenv for a project.
I was eventually able to use a terminal inside Visual Studio Code to get pipenv to run.
However, as soon as I tried
pip install pkg name
and also
python -m pip install pkg name
it returned the earlier error.
I think this happened when I did a soft reset of my PC: I formatted the external drive (which held mostly games and some art programs), added a new SSD, and the old path was kept intact. I did not do a full reset of Windows, but I did uninstall most apps and wiped the D: drive before reinstalling most of the programs and a handful of the games. When I reinstalled Python, however, I installed it on my C: drive, since I had migrated the games off my main SSD and now had plenty of room for project fun.
However, the
Unable to create process using '"D:\python.exe" "D:\Scripts\pip.exe"
error leads me to believe that the old installation is still referenced in some way. How can I reset this so everything uses the C: PATH?
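The pip.exe launcher embeds the absolute path of the Python that installed it, so a copy left over from the D: install will keep pointing at D:\python.exe even after that drive is wiped. A sketch of a fix, assuming the C: Python itself works: regenerate the launchers from the working interpreter and check which executables PATH actually resolves first:

```shell
:: See which python/pip Windows resolves first on PATH
where python
where pip

:: Reinstall pip from the working C: interpreter; this rewrites
:: pip.exe with the correct embedded interpreter path
python -m ensurepip --upgrade
python -m pip install --upgrade --force-reinstall pip
```

If `where pip` still lists a stale D:\Scripts entry afterwards, that directory should be removed from the PATH environment variable (System Properties -> Environment Variables).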

Python, Anaconda & PyCharm multiple versions of Python3

I just installed Anaconda3-2019-10 on my MacBook.
I tried to make sure that my previous Python 3 version was totally uninstalled / removed from my system. Typing python3 into the terminal didn't work anymore.
After installing Anaconda and PyCharm (pycharm-community-anaconda-2019.3.3) I started a new Project to test everything. For that I selected to create a new Conda environment:
After I created the project I checked the Preferences and the "Project Interpreter". This is what I found:
I expected to find two interpreters 1.) my 3.7 Python version and 2.) the Conda environment just created.
Does finding 3 versions mean that I didn't correctly uninstall Python 3 before installing Anaconda, or is there something that I don't understand here?
Do I need both versions?
If not, is there a safe way to remove one of them?
For removing Python 3 from my system I did almost everything suggested in numerous posts on Stack Overflow.
Upon creating a venv (virtual environment), you no longer need to worry about the existing interpreter. https://docs.python.org/3/tutorial/venv.html might be of help.
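The venv suggestion above can be sketched as follows; the project directory and environment name are arbitrary examples:

```shell
# Create an isolated environment inside the project directory
python3 -m venv .venv

# Activate it; `python` and `pip` now resolve to the environment's copies
source .venv/bin/activate

# Confirm which interpreter is active
which python
```

PyCharm can also be pointed at .venv/bin/python as the project interpreter, which sidesteps the question of which system-wide Python survives.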

RHEL 7.6 - Built Python3.6 from Source Broke Network

I have a RHEL system which by default was running Python2.7 and Python3.4
I needed Python 3.6 for a project I wanted to work on, so I downloaded it and built it from source. I ran make and make install, which, in hindsight, may have been the wrong decision.
Now I do not seem to have any internet connectivity. Does anyone know what I may have overwritten to cause this, or at least where specifically I can look to track this issue down?
Note: I can PuTTY into the Linux machine, but it doesn't seem to have any other connectivity, specifically HTTPS.
It's a bit weird that this would break network connectivity. One possible explanation is that the system has networking scripts or a network manager that relies on Python, and it got broken after make install replaced your default Python installation. It may be possible to fix this by reinstalling your RHEL Python packages (sorry, cannot offer more detailed help there, as I don't have access to a RHEL box).
I guess the lesson is "be careful about running make install as superuser". To easily install and manage different Python versions (separate from the system Python), the Anaconda Python distribution would be a good solution.
I suggest to undo that 3.6 installation and use the Software Collections version of python 3.6. See here for python 3.6 installation. Software Collections install "along side" the original versions so as to not affect the OS - and they are included in the subscription.
So after a lot of time slamming my head against the wall, I got it worked out. My best guess is that the system (RHEL 7) relied on something from its default Python 2.7 installation to handle SSL negotiations. Installing 3.6 alongside must have overwritten some pointer. Had I done this correctly, with altinstall, all would likely have been fine.
The most frustrating part is that there were no error messages; connections just timed out.
To fix this, I had to uninstall all Python versions and then reinstall Python 2.7. Once Python 2 was back on the system, everything seemed to work well.
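For reference, the safer source-build sequence the answer alludes to looks roughly like this (a sketch; the configure prefix is an assumption):

```shell
# Build CPython 3.6 from an unpacked source tree
./configure --prefix=/usr/local
make

# altinstall installs python3.6 but deliberately skips creating the
# `python`/`python3` symlinks, so the system interpreter that RHEL's
# own tooling depends on is left untouched
sudo make altinstall

# The new interpreter is then invoked explicitly by version
/usr/local/bin/python3.6 --version
```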

Pyspark + Hadoop + Anaconda install

I'm new to Anaconda, Spark, and Hadoop. I want to set up a standalone dev environment on my Ubuntu 16.04 machine, but I'm confused about what I should do within conda and what is external.
So far I have installed Anaconda and created a TensorFlow environment (I should add that I will be using TF too), and installed PySpark separately, outside of Anaconda.
However, I wasn't sure if that's what I'm supposed to do, or whether I should use conda install instead. I'm also eventually going to want to install Hortonworks' Hadoop, which I hear may already come bundled with Spark.
Long story short: I just want a dev environment with all these technologies that I can play around with, with data flowing from one to the other as seamlessly as possible.
Appreciate any advice on the "correct" way to set everything up.
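One way to keep everything inside conda (a sketch, not the only valid layout; the environment name spark-dev is a hypothetical example): PySpark is packaged on conda-forge, so it can live in the same environment as TensorFlow:

```shell
# Create one environment holding both stacks
conda create -n spark-dev python=3.7 tensorflow
conda activate spark-dev

# pyspark from conda-forge ships with a bundled Spark; openjdk
# provides the JVM that Spark needs to run
conda install -c conda-forge pyspark openjdk

# Quick smoke test of the Spark install
python -c "import pyspark; print(pyspark.__version__)"
```

A full Hadoop distribution would still be installed outside conda, but for playing around with data flowing between Spark and TensorFlow, the bundled Spark in a single conda environment is usually enough.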
