In IPython Notebook I have found two (more or less) workable ways to secure the remote server that hosts my notebook.
1. The c.NotebookApp.password option in the config protects the notebook against write access.
2. The --read-only flag lets unauthenticated users view my notebook, but not change it.
But I'm not happy with point 2. The problem is that it lets anybody view my notebook, while I actually want only certain privileged users to be able to view it. So far I haven't found any way to do that.
Is there a way to secure my notebook globally, e.g. with a .htaccess file or anything else? In that case I could give all privileged users the website password and still edit the notebook myself via option 1 (configured as sketched below).
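For reference, option 1 goes in the notebook server's config file; a minimal sketch, where the hash is a placeholder you generate yourself with IPython.lib.passwd():

# ~/.ipython/profile_default/ipython_notebook_config.py
c = get_config()
# paste the hash produced by: from IPython.lib import passwd; passwd()
c.NotebookApp.password = u'sha1:<your-generated-hash>'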
There is no built-in mechanism for combining authentication with read-only notebooks in IPython, and security is complex enough that an auth + read-only option is not going to be added at the moment.
You can instead install a local copy of nbviewer, and/or use nbconvert to export your notebook as static HTML and serve it behind a classic Apache/.htaccess scheme.
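A minimal sketch of the nbconvert route using its Python API (the notebook filename is illustrative):

import nbformat
from nbconvert import HTMLExporter

# read the notebook and render it to a static HTML page
nb = nbformat.read("mynotebook.ipynb", as_version=4)
body, resources = HTMLExporter().from_notebook_node(nb)
with open("mynotebook.html", "w") as f:
    f.write(body)

The resulting HTML file can then be placed behind the usual Apache/.htaccess protection.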
With databricks-connect we can successfully run code written for Databricks notebooks from many IDEs. Databricks also provides magic commands such as %sql and %md to support multiple languages within a single notebook's cells. One issue I am currently facing when trying to execute Databricks notebooks from PyCharm is as follows:
How do I execute Databricks-specific magic commands from PyCharm?
For example, importing a script or notebook in Databricks is done with this command:
%run
'./FILE_TO_IMPORT'
whereas in an IDE, from FILE_TO_IMPORT import XYZ works.
Also, every time I download a Databricks notebook, the magic commands are commented out, which makes the notebook impossible to use anywhere outside the Databricks environment.
It's really inefficient to convert all the Databricks magic commands every time I want to do any development.
Is there any configuration I could set that automatically handles Databricks-specific magic commands?
Any solution to this would be helpful. Thanks in advance!
Unfortunately, as of databricks-connect version 6.2.0:
"Magic commands cannot be used outside the Databricks environment directly. Working around this would require creating custom functions, but again, that only works for Jupyter, not PyCharm."
Again, since importing .py files requires the %run magic command, this too becomes a major issue. One workaround is to turn the set of files to be imported into a Python package, add it to the cluster via the Databricks UI, and then import and use it in PyCharm, but this is a rather tedious process.
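A minimal packaging sketch for that workaround (all names here are illustrative, not part of any Databricks API):

# setup.py at the root of the code you would otherwise %run
from setuptools import setup, find_packages

setup(
    name="shared_nb_utils",    # hypothetical package name
    version="0.1.0",
    packages=find_packages(),  # finds shared_nb_utils/ with an __init__.py
)

You can then build a wheel with python setup.py bdist_wheel, upload it to the cluster through the Libraries UI, and from shared_nb_utils import XYZ works both in Databricks and in PyCharm.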
I created a deep learning instance on Google Cloud's AI Platform and use the built-in JupyterLab notebooks running in the browser (I use Chrome). Recently I have a problem with saving my code: neither autosave nor manually saving the notebook files works. When I try to save, I keep seeing the message "Saving started", but nothing else happens and the code is not saved. I have tried restarting the kernel as well as restarting the instance, but the problem keeps returning. Has anyone here encountered the same thing, and found a solution?
Thanks
What eventually worked for me: instead of using the HTTPS connection, I connected to the instance via SSH with port forwarding and then accessed JupyterLab on localhost. I followed this guide:
https://cloud.google.com/ai-platform/deep-learning-vm/docs/jupyter
export PROJECT_ID="my-project-id"
export ZONE="my-zone"
export INSTANCE_NAME="my-instance"
gcloud compute ssh --project $PROJECT_ID --zone $ZONE \
    $INSTANCE_NAME -- -L 8080:localhost:8080
Then open http://localhost:8080 in your browser.
I had a similar problem: I opened a notebook and GCP did not save it, but after I started the notebook from a folder inside JupyterLab, it saved correctly.
I'm very confused about whether it's possible to switch between environments within one script. I'm currently working in Jupyter Notebooks and realized I need a package that lives in a separate (non-root) environment, and I would like to be able to import and use it. I initially tried to install it into my root environment, but it has so many conflicts that this isn't possible.

So, switching environments mid-script: is this possible without jumping through hoops in the Anaconda prompt? I've looked at about 30 articles and/or Stack Overflow pages, and it seems my only options are to add my environment to the default sys.path within Jupyter, or to create yet another environment that contains the packages of both. The latter suggestion was unclear; I'm not sure how to do it, and I don't want to have to work through a shell/prompt either.
These were good resources:
RealPython: On Virtual Envs
StackOverflow: In which conda env is Jupyter executing?
Any clarity I can get on this topic would be appreciated; please explain as if I'm a complete beginner (Level 0).
Looking back on this old post now, I thought I'd answer it, since it never got one. While it's extremely frustrating to realize mid-project that needed packages don't mesh together, the lesson is that each significant project should get its own environment. I still have a general environment that I use for minor projects, but for situations like this I use Anaconda's cheat sheet to make sure that the packages in the environment I'm creating will all work together, by specifying versions directly at creation time. Getting comfortable with creating and copying .yml files and loading them directly into conda has also been helpful:
conda env create --file envname.yml
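For reference, a minimal envname.yml might look like the following (the package list and versions are purely illustrative):

name: envname
channels:
  - defaults
dependencies:
  - python=3.8
  - pandas=1.1
  - ipykernel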
Once that's done, if I'm using Jupyter notebooks, it's been important to make sure the environment is registered as a kernel.
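A sketch of that registration step, assuming the environment is called envname and has the ipykernel package installed:

conda activate envname
python -m ipykernel install --user --name envname --display-name "Python (envname)"

After that, the environment shows up as its own entry in the notebook's Kernel > Change kernel menu.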
Of course, if you're using an IDE like PyCharm, you can bypass the conda prompt entirely (PyCharm still talks to conda under the hood), which is sometimes just easier when you're on a deadline than going back and forth between different tools.
I am trying to install NLTK and download the NLTK data. I am on Python 3.7.3 and my pip is up to date. My PC is a company-issued Windows 10 machine.
My installation of NLTK succeeds, but it won't download the data: it neither gives me a GUI to choose downloads nor finishes the download. The cursor just keeps blinking forever.
I have tried running it as an admin and running it through Jupyter Notebook; I never saw the GUI at all.
It's hard to say exactly what's going on. Maybe the download window is hidden behind other windows (I've seen that a lot), or maybe it really doesn't come up. Either way, if you can't find the window, you can largely work around the problem by using the non-GUI form of the downloader:
nltk.download("book") will download all the resources you'll need while reading the book. I recommend you just run this one and move on to exploring the nltk.
nltk.download("all") will download everything in the download store. Probably overkill.
nltk.download(<name>) will download the resource <name> (e.g., 'averaged_perceptron_tagger' for the tagger data). If you try to use a module and it's missing a resource, it will usually tell you what you need to download.
There are some other collective names, including all-corpora, popular, and third-party, but I believe the ones above are the most useful.
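A minimal sketch of the non-GUI route, run from a script or notebook cell:

import nltk

nltk.download("book")                         # everything the NLTK book uses
nltk.download("averaged_perceptron_tagger")   # a single resource by name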
This has been resolved. The culprit was the web proxy server at my workplace: instead of my bypassing it, they added permissions for http://files.pythonhosted.org, and this worked fine.
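If getting the proxy opened up isn't an option, nltk can also be pointed at the proxy directly; a sketch, where the proxy URL is a placeholder:

import nltk

nltk.set_proxy("http://proxy.example.com:3128")  # placeholder proxy address
nltk.download("book")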
I've set up a VM with the Deep Learning Virtual Machine image (Microsoft Azure).
Normally, I connect to the VM via SSH and then run Jupyter with jupyter notebook --no-browser.
But this time I can't run Jupyter Notebook, because I get this message: Bad config encountered during initialization: "No such notebook dir: ''/dsvm/Notebooks''"
How can I fix that?
Thanks for your help!
I presume you are trying to run Jupyter Notebook on the DSVM, and with that goal in mind I suggest the following steps:
1. Move your notebook to ~/notebooks/
2. Find the public IP address of your VM in the Azure dashboard
3. Access https://your_public_ip_address:8000 in your web browser and log in using your VM login credentials
4. You should then be able to see all the files you have in ~/notebooks/
I presume this setup is mandated by Azure for security reasons, to prevent people from exposing an open port without authentication. Hope this helps!
This worked for me:
jupyter notebook --notebook-dir=/home/$USER/notebooks --no-browser
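To make that override permanent instead of passing it on every launch, the same setting can go in the Jupyter config file; a sketch assuming the default config location and an illustrative path:

# ~/.jupyter/jupyter_notebook_config.py
c = get_config()
c.NotebookApp.notebook_dir = '/home/<your-user>/notebooks'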