Parameter to notebook - widget not defined error - databricks

I have passed a parameter from the job scheduler to a Databricks notebook and tried capturing it inside the Python notebook using dbutils.widgets.get().
When I run the scheduled job, I get the error "No Input Widget defined", thrown by the library module "InputWidgetNotDefined".
May I know the reason? Thanks.

To use widgets, you first need to create them in the notebook.
For example, to create a text widget, run:
dbutils.widgets.text('date', '2021-01-11')
After the widget has been created once on a given cluster, it can be used to supply parameters to the notebook on subsequent job runs.
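Putting both steps together, a minimal sketch (the widget name 'date' is only an example and must match the parameter name configured in the job):
# create the widget; the default value is used for interactive runs
dbutils.widgets.text('date', '2021-01-11')
# read the value; a parameter passed by the scheduled job overrides the default
date = dbutils.widgets.get('date')
print(date)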

Related

Import a notebook into another notebook in Azure Synapse

I have one notebook, notebook1, in a Synapse workspace with Python code that contains some variables.
I have created another notebook, notebook2, and I'd like to use the variables from notebook1. I tried to import it as a package, but it didn't work.
Has anyone done this before?
You can access the variables from another notebook by referencing it with %run in the current notebook.
Example:
This is my Notebook2, in which I have defined two variables.
Then, in Notebook1, reference Notebook2 like this: %run Notebook2.
When you execute it, you can access the variables from Notebook2.
Note: Make sure you publish the referenced notebook (Notebook2) so you pick up the recent changes in that notebook, and don't execute any other code in the %run cell, otherwise it will raise an error.
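For example, a minimal sketch (the variable names are hypothetical):
Notebook2 defines the variables:
val1 = 10
val2 = "hello"
Notebook1 references it in a cell of its own, with nothing else in that cell:
%run Notebook2
A later cell in Notebook1 can then use the variables directly:
print(val1, val2)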

Is it possible to install a Databricks notebook into a cluster similarly to a library?

I want the outputs/functions/definitions of a notebook to be available to other notebooks on the same cluster without always having to run the original one over and over...
For instance, I want to avoid:
definitions_file: has multiple commands, functions etc...
notebook_1
#invoking definitions file
%run ../../0_utilities/definitions_file
notebook_2
#invoking definitions file
%run ../../0_utilities/definitions_file
.....
Therefore I want definitions_file to be available to all other notebooks running on the same cluster.
I am using azure databricks.
Thank you!
No, there is no such thing as a "shared notebook" that is implicitly imported. The closest you can do is to package your code as a Python library or put it into a Python file inside Repos, but you will still need to write from my_cool_package import * in every notebook.
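A minimal sketch of the Repos approach (the package and function names are hypothetical):
# my_cool_package/__init__.py, a Python file checked into Repos
def greet(name):
    return f"Hello, {name}!"
Then, in each notebook (Databricks adds the repo root to sys.path for notebooks inside Repos, so the import should resolve directly):
from my_cool_package import *
greet("Databricks")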

How to pass a static value dynamically based on a column value in Azure Databricks

How do I pass a static value dynamically based on a column value in Azure Databricks? Currently I have 13 notebooks, each scheduled separately. I want to schedule only one notebook; the column value (13 rows) that I defined separately in each of the 13 notebooks should be passed in dynamically. How can I do that?
You can create different jobs that refer to the single notebook, pass parameters to each job, and then retrieve those parameters using Databricks widgets (widgets are available for all languages). In the notebook it will look like the following (for example, in Python):
# this is necessary only if you execute the notebook interactively
dbutils.widgets.text("param1_name", "default_value", "description")
# get the job parameter
param1 = dbutils.widgets.get("param1_name")
# ... use param1
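The same parameter mechanism also applies when one notebook invokes another: dbutils.notebook.run accepts a dictionary of arguments that the callee reads through the same widgets. A sketch (the path and values are hypothetical):
# run the shared notebook with a 300-second timeout, passing one parameter
result = dbutils.notebook.run("/path/to/shared_notebook", 300, {"param1_name": "value_for_this_run"})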

%run magic using get_ipython().run_line_magic() in Databricks

I am trying to import other modules inside an Azure Databricks notebook. For instance, I want to import the module called 'mynbk.py' that is at the same level as my current Databricks notebook called 'myfile'.
To do so, inside 'myfile', in a cell, I use the magic command:
%run ./mynbk
And that works fine.
Now, I would like to achieve the same result, but using get_ipython().run_line_magic().
I thought this is what I needed to type:
get_ipython().run_line_magic('run', './mynbk')
Unfortunately, that does not work. The error I get is:
Exception: File `'./mynbk.py'` not found.
Any help is appreciated.
It won't work on Databricks because IPython commands don't know about the Databricks-specific implementation. IPython's %run expects a file to execute, but Databricks notebooks aren't files on disk; they are data stored in a database, so IPython's %run can't find the notebook and you get the error.
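You can verify this from a Databricks notebook cell: there is no corresponding file on the driver's filesystem (a small sketch, reusing the name from the question):
import os
# Databricks notebooks live in the workspace database, not on disk,
# so there is no './mynbk.py' for IPython's %run to find
print(os.path.exists('./mynbk.py'))  # prints False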

How can I get Jupyter Notebook to import a python file?

I have a very simple .py file, displayD3caller.py:
def caller():
    print("Here is a print statement from displayD3caller.py")
I can import and use the function from other files and from the Python shell. However, when I try to import the function in a Jupyter Notebook, it can't find the module:
ImportError: No module named 'displayD3caller'
What can I do to fix this?
Before Jupyter Notebook picks up any change to the source code of related files, the kernel needs to be restarted. This includes creating a new file.
If you make a new module after Jupyter is already running, you need to restart the kernel before it will find it. From the top menu, select Kernel > Restart. You will need to re-execute any cells whose outputs you depend on.
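As a lighter-weight alternative to a full restart (a sketch, reusing the module name from the question):
import importlib
# if the file was created after the kernel started, clear the import
# system's cached directory listings so the new module can be found
importlib.invalidate_caches()
import displayD3caller
# for later edits to an already-imported module, reload it in place
importlib.reload(displayD3caller)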
