Automating the execution of a Jupyter notebook in Azure - azure

I have a script in a Jupyter notebook that creates interactive graphs for the dataset it is given. I then convert the output to an HTML file (without the input cells) to create a report for that dataset to share with my colleagues.
I have also used papermill to parametrize the process, so that I pass it the name of the file and it creates a report for me. All the datasets are stored in Azure Data Lake.
This is all very easy when I am doing it on my local machine, but I want to automate the process to generate reports for new incoming datasets every hour and store the HTML outputs in Azure Data Lake. I want to run this automation in the cloud.
I initially started with Automation accounts, but I did not know how to execute a Jupyter notebook there, or where to store my .ipynb file. I have also looked at a JupyterHub server (VM) on Azure, but I cannot work out how to automate that either.
Can anyone suggest a way to automate the entire process on Azure as cheaply as possible, because I have to generate a lot of reports?
Thanks!

Apart from Automation, you can use Azure Functions as mentioned in this document:
· To run a PowerShell-based Jupyter Notebook, you can use PowerShell in an Azure function to call the Invoke-ExecuteNotebook cmdlet. This is similar to the technique described above for Automation jobs. For more information, see Azure Functions PowerShell developer guide.
· To run a SQL-based Jupyter Notebook, you can use PowerShell in an Azure function to call the Invoke-SqlNotebook cmdlet. For more information, see Azure Functions PowerShell developer guide.
· To run a Python-based Jupyter Notebook, you can use Python in an Azure function to call papermill (see the sketch after this list). For more information, see the Azure Functions Python developer guide.
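For the Python route, here is a minimal sketch of a timer-triggered Azure Function that executes a notebook with papermill, converts it to HTML without inputs, and writes the report to Azure Data Lake Storage Gen2. The notebook name, storage account, file-system name, credential and dataset discovery are all placeholders to replace with your own; it assumes papermill, nbconvert and azure-storage-file-datalake are listed in requirements.txt and the hourly schedule is set in function.json.

import azure.functions as func
import papermill as pm
from nbconvert import HTMLExporter
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical names -- replace with your own notebook, account and file system.
TEMPLATE_NOTEBOOK = "report_template.ipynb"
ACCOUNT_URL = "https://<storage-account>.dfs.core.windows.net"
FILE_SYSTEM = "reports"

def main(mytimer: func.TimerRequest) -> None:
    dataset_name = "latest_dataset.csv"  # in practice, discover newly arrived files here

    # 1. Execute the notebook with papermill, passing the dataset name as a parameter.
    executed = "/tmp/executed.ipynb"
    pm.execute_notebook(TEMPLATE_NOTEBOOK, executed,
                        parameters={"dataset": dataset_name})

    # 2. Convert the executed notebook to HTML, excluding the input cells.
    exporter = HTMLExporter()
    exporter.exclude_input = True
    html_body, _ = exporter.from_filename(executed)

    # 3. Upload the HTML report to the data lake.
    service = DataLakeServiceClient(account_url=ACCOUNT_URL,
                                    credential="<account-key-or-credential>")
    file_client = service.get_file_system_client(FILE_SYSTEM) \
                         .get_file_client(f"{dataset_name}.html")
    file_client.upload_data(html_body, overwrite=True)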
References: "Run Jupyter Notebook on the Cloud in 15 mins #Azure" by Anish Mahapatra (Towards Data Science); "How to run a Jupyter notebook with Python code automatically on a daily basis?" (Stack Overflow); "Scheduled execution of Python script on Azure" (Stack Overflow).

Related

Create Azure databricks notebook from storage account

We have a Python script stored as a blob in an Azure storage account. We want to deploy / create this Python script (as a notebook) in an Azure Databricks workspace so that later an Azure Data Factory pipeline can execute the notebook created/deployed in Databricks.
We want to create / deploy this script only once, as soon as it becomes available in blob storage.
I have searched the web but couldn't find a proper solution for this.
Is it possible to deploy/create a notebook from a storage account? If yes, how?
Thank you.
You can import a notebook into Databricks using a URL, but I expect that you won't want to make that notebook public.
Another solution would be to use a combination of the azcopy tool with the Databricks CLI (workspace sub-command). Something like this:
azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/script.py]" .
databricks workspace import -l PYTHON script.py '<location_on_databricks>'
You can also do it completely in a notebook, combining the dbutils.fs.cp command with the Databricks Workspace REST API, but that could be more complicated because you need to obtain a personal access token, base64-encode the notebook, etc.
You can also use the Databricks REST API 2.0 (Workspace Import) to import a Python script into the Databricks workspace.
Here is the API definition: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/workspace#--import
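As a rough sketch of that API call (the workspace URL, target path and token handling are assumptions; it uses the requests package and a script already downloaded locally):

import base64
import os
import requests

DATABRICKS_HOST = "https://<workspace>.azuredatabricks.net"  # placeholder workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

# The Workspace Import API expects the notebook content base64-encoded.
with open("script.py", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Users/<user>/script",  # target location in the workspace
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()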

Recommended approach for scheduling a python script within Azure and pushing to database

I have a Python script that pulls data from an API, performs some transformations and finally writes the data out to a CSV file.
Currently I use the Windows Task Scheduler to perform this task daily.
I would like to further automate this task and have it sit within an Azure environment that will run the script overnight on a schedule, and also push the results to an Azure database.
I already have an azure subscription including a number of databases.
Two approaches I have read about are:
A virtual machine within Azure: use the Windows scheduler within the VM to run the script and push to the database.
Azure Web Apps to run the script and push to the database (no VM needed).
I was hoping someone could recommend the more efficient approach?
I would strongly recommend looking at Azure Functions.
It is basically a serverless architecture. It will allow you to host your existing code without a virtual machine or web app.
It can be configured to run at the times and frequency you require.
https://azure.microsoft.com/en-us/services/functions/
You can set a connection string for your SQL environment and connect to your SQL database to push the data.
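As a minimal sketch, assuming a timer-triggered Python Function, the pyodbc package, a hypothetical table, and a connection string stored in the Function App's application settings:

import os
import pyodbc
import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    rows = fetch_and_transform()  # your existing API pull and transformation logic

    # The connection string lives in the Function App's application settings.
    conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
    cursor = conn.cursor()
    cursor.executemany(
        "INSERT INTO dbo.DailyResults (metric, value) VALUES (?, ?)",  # hypothetical table
        rows,  # list of (metric, value) tuples
    )
    conn.commit()
    conn.close()

def fetch_and_transform():
    # Placeholder for the API call and transformations currently run locally.
    return [("example_metric", 42.0)]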

Access databricks secrets in pyspark/python job

Databricks secrets can be accessed within notebooks using dbutils, but since dbutils is not available outside notebooks, how can one access secrets in PySpark/Python jobs, especially if they are run using MLflow?
I have already tried How to load databricks package dbutils in pyspark,
which does not work for remote jobs or MLflow project runs.
In raw PySpark you cannot do this. However, if you are developing a PySpark application specifically for Databricks, then I strongly recommend you look at Databricks Connect.
This gives access to parts of dbutils, including secrets, from an IDE. It also simplifies how you access storage so that it aligns with how the code will run in production.
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/databricks-connect
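With Databricks Connect installed, a sketch like the following works from an IDE or a job; the secret scope and key names are placeholders:

from pyspark.sql import SparkSession
from pyspark.dbutils import DBUtils  # shipped with databricks-connect

spark = SparkSession.builder.getOrCreate()
dbutils = DBUtils(spark)

# Retrieve a secret exactly as you would inside a notebook.
api_key = dbutils.secrets.get(scope="my-scope", key="my-key")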

Custom Script in Azure Data Factory & Azure Databricks

I have a requirement to parse a lot of small files and load them into a database in a flattened structure. I would prefer to use ADF V2 and SQL Database to accomplish this. The file parsing logic is already available as a Python script, and I want to orchestrate it in ADF. I can see the option of using a Python notebook connector to Azure Databricks in ADF V2. Will I be able to run a plain Python script in Azure Databricks through ADF? If I do so, will the script run only on the Databricks cluster's driver and therefore not utilize the cluster's full capacity? I am also thinking of calling Azure Functions instead. Please advise which one is more appropriate in this case.
Just to provide some ideas for your reference.
Firstly, you are talking about notebooks and Databricks, which suggests that ADF's own Copy activity and Data Flow can't meet your needs; as far as I know, ADF only offers a fairly simple flatten feature. If you haven't tried that yet, please try it first.
Secondly, if you do have requirements beyond ADF's features, why not leave ADF out? Notebooks and Databricks don't have to be used with ADF, so why pay the extra cost? For a notebook, you have to install packages yourself, such as pymssql or pyodbc. For Azure Databricks, you can mount Azure Blob Storage and access those files as a file system (see the sketch below). In addition, I assume you don't need many workers for the cluster, so just configure it with a maximum of 2.
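A rough sketch of such a mount, with the storage account, container, secret scope and key names all as placeholders:

# Run once per workspace; the mount persists across clusters.
dbutils.fs.mount(
    source="wasbs://<container>@<storage-account>.blob.core.windows.net",
    mount_point="/mnt/smallfiles",
    extra_configs={
        "fs.azure.account.key.<storage-account>.blob.core.windows.net":
            dbutils.secrets.get(scope="<scope>", key="<storage-key>")
    },
)

# After mounting, the blobs look like ordinary files.
files = dbutils.fs.ls("/mnt/smallfiles")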
Databricks is more suitable when the script is managed as a job, I think.
An Azure Function could also be an option: you could create a blob trigger and load the files into one container. Of course, you will have to learn the basics of Azure Functions if you are not familiar with them; however, an Azure Function could be more economical.
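For the Azure Functions route, a minimal sketch of a blob-triggered function is below; the parsing logic is a stand-in for your existing script, and the container and storage connection would be configured in function.json:

import azure.functions as func

def main(myblob: func.InputStream) -> None:
    # Fired once per new blob; the container and connection are set in function.json.
    text = myblob.read().decode("utf-8")

    # Your existing parsing logic goes here, producing flattened rows.
    rows = [line.split(",") for line in text.splitlines() if line]

    # From here the rows can be inserted into SQL Database, e.g. with pyodbc,
    # using a connection string kept in the Function App's application settings.
    print(f"Parsed {len(rows)} rows from {myblob.name}")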

Passing the user name from ADF to notebooks

I want to pass the logged-in user name from Azure Data Factory to an Azure Databricks notebook.
I tried the dbutils functionality but had no luck.
x=str(dbutils.notebook.entry_point.getDbutils().notebook().getContext().tags().apply('user'))
print (x)
I tried the above code in the notebook. When I run the notebook directly it gives the expected result, but from ADF it does not work.
In short, you can't.
In Data Factory, when you create the linked service to Databricks, you give ADF a personal access token. This means all requests made from ADF are executed on behalf of that token, but at that point there is no user actually logged in to the Databricks platform, so no user context is initialized.
In ADF there isn't even a user variable, because pipelines are designed (expected) to be executed via schedule or event triggers or by external systems, not manually by users.
Here is the list of available system variables
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-system-variables
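If you only need to hand some identifier to the notebook, a common workaround (not a way to recover the real logged-in user) is to pass it explicitly as a base parameter on the ADF Databricks Notebook activity and read it in the notebook. The parameter name below is a placeholder:

# In the Databricks notebook: read the value the ADF Notebook activity passes
# via its "Base parameters" setting (hypothetical parameter "callerName").
dbutils.widgets.text("callerName", "")  # default so the notebook also runs interactively
caller = dbutils.widgets.get("callerName")
print(caller)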
