I have my shared code in one package cell, and I use that package from another notebook. But every time the cluster restarts, that package cell's result is destroyed, and the notebook that uses the package can't find it, failing with an error like:
error: object abc is not a member of package com
import com.abc.utility.Shared
The code for the package cell, in a separate notebook, looks like this:
package com.abc.utility
import com.databricks.dbutils_v1.DBUtilsHolder.dbutils
object Shared {
  def outputFilesOperation(dbfsMountPoint: String): Unit = {
    // stuff here
  }
}
And the separate notebook where I'm using the above package:
import com.abc.utility.Shared
Shared.outputFilesOperation(dbfsMountPoint = "/mnt/abc")
You need to include the first notebook in the notebook where you're using the object. You can do it via %run, in a separate cell: %run ./FirstNotebookName (see docs).
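For example, assuming the package-cell notebook is named SharedUtilities (an illustrative name, not one from the question) and sits in the same folder, the consuming notebook would look like this:
Cell 1 (the %run magic must be alone in its cell):
%run ./SharedUtilities
Cell 2:
import com.abc.utility.Shared
Shared.outputFilesOperation(dbfsMountPoint = "/mnt/abc")
Because %run executes the package cell in the current session, the object is recompiled after every cluster restart, so the import no longer fails.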
Related
I have one notebook (notebook1) in a Synapse workspace with Python code that defines some variables.
I have created another notebook (notebook2) and I'd like to use the variables from notebook1. I tried to import it as a package, but it didn't work.
Has anyone done that in the past?
You can get the variables from another notebook by referencing it with %run in the present notebook.
Example:
This is my Notebook2, in which I have defined two variables.
Then, in Notebook1, reference Notebook2 like this: %run Notebook2.
When you execute it, you can access the variables from it.
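For illustration, assuming Notebook2 defines two variables like these (the names are made up for the example):
# Notebook2: define the shared variables
my_storage_account = "mystorageacct"
my_container = "raw"
Notebook1 then references it in a cell of its own:
%run Notebook2
After that cell has run, a later cell in Notebook1 can use the variables directly:
print(my_storage_account, my_container)  # the variables are now in scope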
Note: Make sure you publish the called notebook (Notebook2) so that %run picks up the recent changes in it, and don't put any other code in the %run cell, otherwise it will raise an error.
I want the outputs/functions/definitions of a notebook to be available to other notebooks on the same cluster without always having to run the original one over and over.
For instance, I want to avoid:
definitions_file: has multiple commands, functions, etc.
notebook_1
#invoking definitions file
%run ../../0_utilities/definitions_file
notebook_2
#invoking definitions file
%run ../../0_utilities/definitions_file
.....
Therefore I want definitions_file to be available to all other notebooks running on the same cluster.
I am using Azure Databricks.
Thank you!
No, there is no such thing as a "shared notebook" that is implicitly imported. The closest you can get is to package your code as a Python library, or put it into a Python file inside Repos, but you will still need to write from my_cool_package import * in every notebook.
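As a minimal sketch of the Repos approach (the file name and function are illustrative, and this assumes a runtime where Databricks puts the repo root on sys.path for notebooks inside Repos):
# utils/definitions_file.py, committed to the repo
def clean_column_names(df):
    # Shared helper: normalize Spark DataFrame column names
    return df.toDF(*[c.strip().lower() for c in df.columns])
Each notebook then needs only the import, with no %run cell:
from utils.definitions_file import clean_column_names
df = clean_column_names(df)
The import is still one line per notebook, but unlike %run it is cached by Python, so repeating it is cheap.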
I am trying to import other modules inside an Azure Databricks notebook. For instance, I want to import the module called 'mynbk.py' that is at the same level as my current Databricks notebook, called 'myfile'.
To do so, inside 'myfile', in a cell, I use the magic command:
%run ./mynbk
And that works fine.
Now, I would like to achieve the same result, but using get_ipython().run_line_magic().
I thought, this is what I needed to type:
get_ipython().run_line_magic('run', './mynbk')
Unfortunately, that does not work. The error I get is:
Exception: File `'./mynbk.py'` not found.
Any help is appreciated.
It won't work on Databricks, because IPython commands don't know about the Databricks-specific implementation: IPython's %run expects a file to execute, but Databricks notebooks aren't files on disk, they are data stored in a database, so IPython's %run can't find the notebook, and you get that error.
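If the goal is to trigger the other notebook programmatically (rather than to import its definitions), Databricks exposes dbutils.notebook.run for that, though note it behaves differently from %run:
# Runs ./mynbk as a separate, ephemeral job and returns its exit value.
# Unlike %run, this does NOT bring mynbk's functions or variables
# into the current notebook's scope.
result = dbutils.notebook.run("./mynbk", 600)  # 600 = timeout in seconds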
Why is log10() failing to be recognized when called within a function defined in another script? I'm running Python 3 in Anaconda (Jupyter and Spyder).
I've had success with log10() in Jupyter (oddly, without even calling import math). I've had success defining functions in a .py file and calling those functions from a separate script. I should be able to perform a simple log10.
I created a new function (in Spyder) and saved it in a file "test_log10.py":
def test_log10(input):
    import math
    return math.log10(input)
In a separate script (a Jupyter notebook) I run:
import test_log10
test_log10.test_log10(10)
I get the following error:
"NameError: name 'log10' is not defined"
What am I missing?
Since I'm not using Jupyter or similar environments, I don't know how to fix it in those systems; perhaps there is some configuration file there, so check the documentation.
But on the issue itself: when this happens, it's because Python has not "linked" something correctly at import time, so I suggest a workaround using the libraries as follows:
import numpy as np
import math
and when you use functions from math, simply add the np. prefix, i.e. change:
return math.log10(input)
to
return np.math.log10(input)
I don't know exactly why the mismatch occurs, but this worked for me.
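Another possibility worth ruling out (an assumption on my part, not something confirmed in the question) is a stale copy of test_log10 cached from an earlier import, e.g. from before the import math line was added. Reloading the module, or restarting the kernel, forces Python to re-read the file:
import importlib
import test_log10

# Discard the cached module object and re-execute test_log10.py
importlib.reload(test_log10)
print(test_log10.test_log10(10))  # expected output: 1.0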
I have a very simple .py file, displayD3caller.py:
def caller():
    print("Here is a print statement from displayD3caller.py")
I can import and use the function defined in other files, and from the Python shell. However, when I try to import the function in a Jupyter Notebook, it can't find the module:
ImportError: No module named 'displayD3caller'
What can I do to fix this?
Before Jupyter Notebook picks up any change to the source code of related files, the kernel needs to be restarted. This includes creating a new file.
If you make a new module after Jupyter is already running, you need to restart the kernel before it will find it. From the top menu, select Kernel > Restart. You will need to re-execute any cells whose outputs you depend on.
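If a full restart is inconvenient, it can also help to clear the import system's cached directory listings first; this is standard Python (importlib.invalidate_caches), not Jupyter-specific, and is aimed exactly at modules created after the interpreter started:
import importlib

# Clear cached directory listings so newly created .py files can be found
importlib.invalidate_caches()

import displayD3caller
displayD3caller.caller()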