I would like to always start 'jupyter lab' with the --no-browser option. I realize I could create an alias, but my question is: how can I change jupyter_notebook_config.py to specify that option?
As an addition to the answer by Robert Lugg.
It might be the case that your version of JupyterLab expects configuration in a different file, for example, .jupyter/jupyter_server_config.py. Then
jupyter server --generate-config
will generate the file .jupyter/jupyter_server_config.py, which you can then edit, setting
c.ServerApp.open_browser = False
I found it documented here:
https://jupyter-notebook.readthedocs.io/en/stable/config.html
The option is NotebookApp.open_browser, so the entry in jupyter_notebook_config.py would be:
c.NotebookApp.open_browser = False
I'm using JupyterLab version 1.2.3
I have generated a ~/.jupyter/jupyter_notebook_config.py file by running jupyter-lab --generate-config.
At the bottom of the file, I have added the line c.InteractiveShell.ast_node_interactivity = "all".
However, when I run jupyter lab, the notebook still behaves as if the default value for InteractiveShell.ast_node_interactivity were set.
Is any other step required to make the configuration file active? Or how can I "debug" to better understand what the problem is?
You must generate a config file in IPython with the command:
$ ipython profile create
The created file will be ipython_kernel_config.py; make the indicated changes in that file.
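For example, a minimal sketch of the relevant lines (assuming the default profile, so the file lands under ~/.ipython/profile_default/):
# ~/.ipython/profile_default/ipython_kernel_config.py
# (created by `ipython profile create`; the path assumes the default profile)
c = get_config()  # provided by IPython when it loads this file
c.InteractiveShell.ast_node_interactivity = "all"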
I installed Python 3.7 from Windows Store, it runs perfectly. I successfully installed Jupyter and other packages with pip from cmd.
The thing is, when I run Jupyter (python -m notebook), it says
and automatically opens the browser with an ERR_FILE_NOT_FOUND page. Opening the notebook by copy-pasting the URLs works just as expected, so how can I tell Jupyter to open the URL by default instead of the file?
At my end, the file does not open either due to security settings of Chromium (ERR_ACCESS_DENIED).
There's a config parameter which controls how the browser gets access to Jupyter: NotebookApp.use_redirect_file (a Bool; cf. the Jupyter docs).
In order to change this, create a config with jupyter notebook --generate-config and edit the config file: uncomment the line #c.NotebookApp.use_redirect_file = True and change the value to False. On the next start of Jupyter, the browser is pointed at the http URL instead of the file.
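A minimal sketch of the resulting line (assuming the default config location):
# ~/.jupyter/jupyter_notebook_config.py
# uncommented and flipped from its default of True:
c.NotebookApp.use_redirect_file = False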
HTH
The solution above works (UBUNTU). You just have to bear in mind a few things:
1st: If you use jupyter notebook, you need to create the file with jupyter notebook --generate-config and look for the line c.NotebookApp.use_redirect_file = True.
If you use jupyter-lab (as I do), you need to use the command jupyter-lab --generate-config instead, and the line to find is c.ServerApp.use_redirect_file = True (a sketch follows below). In both cases this True needs to become False.
2nd: It is given in the previous answer but it's easy to miss: you need to uncomment the line (remove the "#" at the start of the line).
PS: I wanted to add this as a comment but was not able to due to reputation.
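For the jupyter-lab case, a minimal sketch (assuming the generated file is ~/.jupyter/jupyter_lab_config.py):
# ~/.jupyter/jupyter_lab_config.py (from `jupyter-lab --generate-config`)
# uncommented and flipped:
c.ServerApp.use_redirect_file = False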
I am trying to get mujoco_py running. When I do
import mujoco_py
I get this error:
Exception:
Missing path to your environment variable.
Current values LD_LIBRARY_PATH=
Please add following line to .bashrc:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/jonah/.mujoco/mjpro150/bin
I have added the above line to both /etc/skel/.bashrc and ~/.bashrc. If I run
echo $LD_LIBRARY_PATH
I get
/home/jonah/.mujoco/mjpro150/bin/
My .mujoco folder includes mjkey.txt and the mjpro150 folder. I can run ./simulate successfully, so I have a feeling that this is some kind of mujoco_py specific bug.
Which program do you use to import mujoco?
I had a similar issue using mujoco_py with PyCharm Community 2018.1. A workaround was to launch PyCharm from the terminal instead of using the launcher icon. Maybe it could help with your issue too.
Otherwise you could try adding the LD_LIBRARY_PATH to ~/.profile instead of ~/.bashrc, as proposed in this answer here: https://askubuntu.com/questions/1022836/python-not-recognizing-ld-library-path/1022913#1022913
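Either way, a quick check from inside the interpreter (e.g., PyCharm's Python console) shows whether the variable actually reached the process:
import os

# Prints "<not set>" if the process was launched without the variable,
# e.g., from a desktop launcher that never read ~/.bashrc.
print(os.environ.get("LD_LIBRARY_PATH", "<not set>"))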
You can try reinstalling PyCharm to get the newest version.
After you save the .bashrc file, execute this command:
source ~/.bashrc
Now the variable is updated in that shell.
Please check which user you run the code as; a user mismatch will cause this problem. This checklist may help (a quick check follows below):
Don't use 'sudo' to run the code;
Don't use 'sudo' or a virtual environment (e.g., Anaconda) to run PyCharm (if you run the code in PyCharm).
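As a quick sanity check (a sketch using only the standard library), you can confirm which user the interpreter actually runs as:
import getpass

# Should print your own user name, not "root" or a service account.
print(getpass.getuser())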
I am using the Jupyter notebook a lot, so I would like to make opening it as short and as easy as possible. Is there a way to avoid opening it by typing "jupyter notebook" in the terminal? Is there some shortcut key? (I am using it on Ubuntu 16.04.)
If you really want, you can put this in your ~/.bashrc file
jupyter notebook
Each time you open a terminal, that command will execute automatically.
So, the first time would be okay, but if you open up more terminals, that command will get executed each time.
However, if you really want, you can write a script that will only execute that command if Jupyter is not already running (use ps and grep for jupyter; a sketch follows below). If you do that, replace the jupyter notebook line in your .bashrc with the name of your script.
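A minimal sketch of such a guard script in Python (the grep step is replaced by a substring check; the process name jupyter-notebook is an assumption about how the server appears in ps output):
#!/usr/bin/env python3
# Start Jupyter only if no notebook server appears to be running already.
import subprocess

ps = subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True)
if "jupyter-notebook" not in ps.stdout:
    # Detach, so the shell that sourced .bashrc is not blocked.
    subprocess.Popen(["jupyter", "notebook"])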
How about an alias? Edit your bash profile and set an alias for jupyter notebook, e.g. alias jpy="jupyter notebook". You can even append a notebook path to open a specific file directly, I think.
As already mentioned, similar and helpful questions to this one are here and here. On my new Mac, I tried to use the package nbopen, which other users have reported to do the job, but I couldn't manage to make it work properly. Eventually, this answer here brought me the desired solution.
I'm on Windows 10. I was trying to get Spark up and running in a Jupyter Notebook alongside Python 3.5. I installed a pre-built version of Spark and set the SPARK_HOME environmental variable. I installed findspark and run the code:
import findspark
findspark.init()
I receive a ValueError:
ValueError: Couldn't find Spark, make sure SPARK_HOME env is set or Spark is in an expected location (e.g. from homebrew installation).
However, the SPARK_HOME variable is set. Here is a screenshot that shows the list of environment variables on my system.
Has anyone encountered this issue or would know how to fix this? I only found an old discussion in which someone had set SPARK_HOME to the wrong folder but I don't think it's my case.
I had the same problem and wasted a lot of time. I found two solutions.
First, copy the downloaded Spark folder somewhere under the C: drive and pass that path explicitly:
import findspark
findspark.init('C:/spark')
Second, use findspark's own lookup to find the Spark folder automatically:
import findspark
findspark.find()
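A combined sketch of the two (findspark.find() returns the detected Spark location, assuming SPARK_HOME is set or Spark sits in a standard place; the result can then be passed to init()):
import findspark

# find() locates Spark; init() then wires pyspark onto sys.path.
spark_home = findspark.find()
print("Using Spark at:", spark_home)
findspark.init(spark_home)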
The environment variables get updated only after a system restart; it works after restarting your system.
I had the same problem and solved it by installing vagrant and VirtualBox. (Note, though, I use Mac OS and Python 2.7.11.)
Take a look at this tutorial, which is for the Harvard CS109 course:
https://github.com/cs109/2015lab8/blob/master/installing_vagrant.pdf
After "vagrant reload" on the terminal , I am able to run my codes without errors.
NOTE the difference between the result of command "os.getcwd" shown in the attached images.
I had the same problem when installing spark using pip install pyspark findspark in a conda environment.
The solution was to do this:
export SPARK_HOME=/Users/pete/miniconda3/envs/cenv3/lib/python3.6/site-packages/pyspark/
jupyter notebook
You'll have to substitute the name of your conda environment for cenv3 in the command above.
Restarting the system after setting up the environmental variables worked for me.
I had the same problem and solved it by closing cmd and opening it again. I forgot that after editing an environment variable on Windows, you should restart cmd.
I got the same error. Initially, I had stored my Spark folder in the Documents directory. Later, when I moved it to the Desktop, it suddenly started recognizing all the system variables and it ran findspark.init() without any error.
Try it out once.
This error may occur if you don't set the environment variables in your .bashrc file. Set your Python environment variables as follows:
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.8.1-src.zip:$PYTHONPATH
export PATH=$SPARK_HOME/bin:$SPARK_HOME/python:$PATH
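A quick check from Python that the interpreter actually sees these variables (names as used above):
import os

for var in ("SPARK_HOME", "PYTHONPATH", "PATH"):
    print(var, "=", os.environ.get(var, "<not set>"))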
The simplest way I found to use Spark with Jupyter Notebook is:
1- download Spark
2- unzip it to the desired location
3- open Jupyter Notebook in the usual way, nothing special
4- now run the code below
import findspark
findspark.init("location of spark folder ")
# in my case it is like
import findspark
findspark.init("C:\\Users\\raj24\\OneDrive\\Desktop\\spark-3.0.1-bin-hadoop2.7")