Can I create a C2 instance Notebook in AI Platform? - gcp-ai-platform-notebook

I tried to create a C2 instance in AI Platform Notebooks, but it is not shown in the UI. Is this compute instance type supported?

Try using the CLI:
gcloud beta notebooks instances create c2-nb \
  --machine-type=c2-standard-4 \
  --vm-image-project=deeplearning-platform-release \
  --vm-image-family=common-cpu-notebooks \
  --location=us-central1-a

Related

AWS EC2 Instance Security Group - All traffic

I am trying to launch an EC2 instance with a security group allowing 'all traffic' so I can run a Jupyter notebook for personal practice. However, I notice that the UI changed in AWS; where is this selection now? Should I choose all of the options shown in the attached file?
I followed the set-up instructions (a bit outdated): https://www.udemy.com/course/spark-and-python-for-big-data-with-pyspark/learn/lecture/7005606#questions, but cannot launch the Jupyter notebook (the error says the page cannot be loaded).
Can anyone suggest a fix? Thanks!
My settings in ./jupyter/jupyter_notebook_config.py are as below:
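(The actual settings did not come through with the post. For orientation, a minimal sketch of the kind of jupyter_notebook_config.py that tutorial sets up for remote HTTPS access is shown below; the certificate path and port are illustrative assumptions, not the asker's real values.)

# Remote-access settings for the classic Jupyter Notebook server (illustrative sketch)
c = get_config()                      # provided by Jupyter when it loads this config file
c.NotebookApp.ip = '0.0.0.0'          # listen on all interfaces, not just localhost
c.NotebookApp.open_browser = False    # the EC2 instance is headless
c.NotebookApp.port = 8888             # must match an inbound rule in the security group
c.NotebookApp.certfile = '/home/ubuntu/certs/mycert.pem'  # assumed self-signed cert path

With settings like these the notebook is reached at https://<ec2-public-dns>:8888, which only works if the security group allows inbound traffic on that port (restricting the source to your own IP is safer than 'all traffic').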

Azure Synapse - How do I define a (Scala) utility class that I can use in other notebooks?

I'm migrating from Databricks to Azure Synapse. In Databricks, I have a couple of utility classes that are defined in notebooks that start with:
package myUtilityNamespace
object myUtilityClass {...code...}
And then where I want to use that utility class I have a cell that runs the notebook that defines the utility class like:
%run ../myNotebookThatDefinesUtilityClass
Followed by a cell that imports the namespace:
import myUtilityNamespace
And then in subsequent cells I can use myUtilityClass.
How do I do the same thing in Azure Synapse? I feel dumb asking this question, but how do I create a class?
When I try the same thing in Azure Synapse Studio I get this error in my console output:
error: illegal start of definition
package myUtilityNamespace;

Execute databricks magic command from PyCharm IDE

With databricks-connect we can successfully run code written in Databricks notebooks from many IDEs. Databricks has also created many magic commands to support multi-language cells, such as %sql or %md. One issue I am currently facing when I try to execute Databricks notebooks from PyCharm is as follows:
How do I execute a Databricks-specific magic command from PyCharm?
E.g.
Importing a script or notebook is done in Databricks using this command:
%run
'./FILE_TO_IMPORT'
Whereas in an IDE, from FILE_TO_IMPORT import XYZ works.
Also, every time I download a Databricks notebook it comments out the magic commands, which makes them impossible to use anywhere outside the Databricks environment.
It is really inefficient to convert all the Databricks magic commands every time I want to do any development.
Is there any configuration I could set that automatically detects Databricks-specific magic commands?
Any solution to this will be helpful. Thanks in advance!
Unfortunately, as of databricks-connect version 6.2.0:
"We cannot use magic commands outside the Databricks environment directly. This would require creating custom functions, but again that will only work for Jupyter, not PyCharm."
Again, since importing .py files requires the %run magic command, this also becomes a major issue. One workaround is to convert the set of files to be imported into a Python package, add it to the cluster via the Databricks UI, and then import and use it in PyCharm. But this is a very tedious process.
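As a rough illustration of the conversion described above, the sketch below rewrites the commented-out %run magics that appear when a Databricks notebook is exported as a .py source file (lines of the form # MAGIC %run './FILE_TO_IMPORT') into plain imports. The export format prefix, the regular expression, and the file names are assumptions for illustration, not a complete or official tool:

import re
from pathlib import Path

# Matches exported magic lines such as: # MAGIC %run './FILE_TO_IMPORT'
RUN_MAGIC = re.compile(r"^#\s*MAGIC\s+%run\s+['\"]?\.?/?(?P<name>[\w./-]+)['\"]?\s*$")

def convert_run_magics(src_path, dst_path):
    """Best-effort rewrite of commented-out %run magics as Python imports."""
    out_lines = []
    for line in Path(src_path).read_text().splitlines():
        match = RUN_MAGIC.match(line)
        if match:
            module = match.group("name")
            if module.endswith(".py"):
                module = module[:-3]
            module = module.replace("/", ".")  # "folder/notebook" -> "folder.notebook"
            out_lines.append("from " + module + " import *  # was: " + line.strip())
        else:
            out_lines.append(line)
    Path(dst_path).write_text("\n".join(out_lines) + "\n")

convert_run_magics("exported_notebook.py", "converted_notebook.py")  # hypothetical file names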

How to list Databricks secret scopes using Python when working with the Secrets API

I can create a scope. However, I want to make sure I create the scope only when it does not already exist, and I want to do that check using Python. Is that doable?
What I have found is that I can create the scope multiple times and not get an error message -- is this the right way to handle it? The documentation at https://docs.databricks.com/security/secrets/secret-scopes.html#secret-scopes points to using
databricks secrets list-scopes
to list the scopes. However, I created a cell and ran
%sh
databricks secrets list-scopes
I got an error message saying "/bin/bash: databricks: command not found".
Thanks!
This will list all the scopes.
dbutils.secrets.listScopes()
You can't run the CLI commands from your Databricks cluster (through a notebook). The CLI needs to be installed and configured on your own workstation; you can then run these commands there after configuring the connection to a Databricks workspace using a generated token.
Still, you can run Databricks CLI commands in a notebook by doing a similar databricks-cli setup at the cluster level and running them as bash commands. Install the Databricks CLI with pip install databricks-cli.
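Coming back to the original create-only-if-missing question, a minimal sketch along these lines should work from inside a notebook, assuming you have a workspace URL and a personal access token (the placeholder values and the scope name are illustrative). It checks dbutils.secrets.listScopes() first and only calls the Secrets REST API endpoint /api/2.0/secrets/scopes/create when the scope is absent:

import requests

# Illustrative placeholders -- substitute your workspace URL and a personal access token
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
DATABRICKS_TOKEN = "<personal-access-token>"

scope_name = "my-scope"

# dbutils.secrets.listScopes() returns objects with a .name attribute (dbutils is predefined in notebooks)
existing_scopes = [s.name for s in dbutils.secrets.listScopes()]

if scope_name not in existing_scopes:
    response = requests.post(
        DATABRICKS_HOST + "/api/2.0/secrets/scopes/create",
        headers={"Authorization": "Bearer " + DATABRICKS_TOKEN},
        json={"scope": scope_name, "initial_manage_principal": "users"},
    )
    response.raise_for_status()
    print("Created scope " + scope_name)
else:
    print("Scope " + scope_name + " already exists; nothing to do.")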

Azure ML Workbench File from Blob

When trying to reference/load a .dsource or .dprep file generated from a data source backed by blob storage, I receive the error "No files for given path(s)".
Tested with .py and .ipynb files. Here's the code:
# Use the Azure Machine Learning data source package
from azureml.dataprep import datasource
df = datasource.load_datasource('POS.dsource') #Error generated here
# Remove this line and add code that uses the DataFrame
df.head(10)
Please let me know what other information would be helpful. Thanks!
Encountered the same issue and it took some research to figure out!
Currently, data source files from blob storage are only supported for two cluster types: Azure HDInsight PySpark and Docker (Linux VM) PySpark.
In order to get this to work, it's necessary to follow instructions in Configuring Azure Machine Learning Experimentation Service.
I also ran az ml experiment prepare -c <compute_name> to install all dependencies on the cluster before submitting the first command, since that deployment takes quite a bit of time (at least 10 minutes for my D12 v2 cluster).
I got the .py files to run with an HDInsight PySpark compute cluster (for data stored in Azure blobs), but .ipynb files are still not working on my local Jupyter server: the cells never finish.
I'm from the Azure Machine Learning team - sorry you are having issues with the Jupyter notebook. Have you tried running the notebook from the CLI? If you run it from the CLI you should see stderr/stdout; the IFrame in Workbench swallows the actual error messages. This might help you troubleshoot.
