How do we test notebooks that use Delta Live Tables - Databricks

I cannot execute the Delta Live Tables code in the notebook. I always have to create a DLT pipeline by going into the Workflows tab. Is there an easy way to test the Delta Live Tables code in a notebook?
Thanks

Debugging Delta Live Tables pipelines is challenging. Luckily, Souvik Pratiher has created an open-source library for debugging Delta Live Tables notebooks on regular Databricks clusters.
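For context, DLT table definitions are ordinary Python functions decorated with @dlt.table, which is what makes this kind of interactive debugging possible. The sketch below is purely illustrative and is not that library's API: it substitutes a hypothetical FakeDLT stand-in for the dlt module so the table functions can be called directly on an interactive cluster (an existing SparkSession named spark is assumed).
from pyspark.sql import DataFrame

class FakeDLT:
    """Hypothetical stand-in for the `dlt` module, for interactive testing only."""
    def __init__(self):
        self._tables = {}

    def table(self, name=None, comment=None, **kwargs):
        # Registers the decorated function under its table name.
        def decorator(fn):
            self._tables[name or fn.__name__] = fn
            return fn
        return decorator

    def read(self, name) -> DataFrame:
        # In a real pipeline dlt.read() resolves a LIVE table; here we simply
        # re-run the registered function.
        return self._tables[name]()

dlt = FakeDLT()  # shadows the real `dlt` module inside the notebook

@dlt.table(name="raw_events", comment="Raw events ingested from cloud storage")
def raw_events():
    return spark.read.format("json").load("/tmp/events")  # path is illustrative

@dlt.table(name="clean_events")
def clean_events():
    return dlt.read("raw_events").dropna()

clean_events().show()  # interactive check on a regular cluster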

Related

Writing data to datastore using Jupyter notebook on Azure ML studio

Hi, I have prepared some data from a saved table from a Datastore in a Jupyter notebook in Azure ML studio. Now I want to write the prepared data back to a datastore using the same notebook.
Please help me with some examples.
Note: Here I have connected my ADLS Gen2 to the datastore.
Integration work includes enabling all datastore types to be consumable by data prep/dataset. This is very important because data prep/dataset is the engine that powers the data ingestion story for Azure ML, and being able to support all datastore types is crucial in making this a reality. This covers runs that involve reading from and writing to a datastore using data prep/dataset.
The table below presents what we currently support.
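For the concrete ask here, writing a prepared pandas dataframe back to an ADLS Gen2-backed datastore, a minimal sketch with the azureml-core SDK could look like the following; the datastore name, target path, and dataset name are placeholders.
import pandas as pd
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()                        # uses the workspace config.json
datastore = Datastore.get(ws, "my_adls_datastore")  # name of the registered ADLS Gen2 datastore

prepared_df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# Writes the dataframe as Parquet into the datastore and registers it as a dataset.
Dataset.Tabular.register_pandas_dataframe(
    dataframe=prepared_df,
    target=(datastore, "prepared/output"),  # folder path inside the datastore
    name="prepared_data",
)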

Delta Lake independent of Apache Spark?

I have been exploring the data lakehouse concept and Delta Lake. Some of its features seem really interesting. Right there on the project home page https://delta.io/ there is a diagram showing Delta Lake running on "your existing data lake" without any mention of Spark. Elsewhere it suggests that Delta Lake indeed runs on top of Spark. So my question is, can it be run independently of Spark? Can I, for example, set up Delta Lake with S3 buckets for storage in Parquet format, schema validation, etc., without using Spark in my architecture?
You might keep an eye on this: https://github.com/delta-io/delta-rs
It's early and currently read-only, but worth watching as the project evolves.
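For illustration only, a minimal read with the delta-rs Python bindings (the deltalake package) looks roughly like this; the path is a placeholder, credentials are assumed to be available in the environment, and the API has evolved since this answer was written.
from deltalake import DeltaTable

dt = DeltaTable("s3://my-bucket/my-delta-table")  # placeholder table location
print(dt.files())                       # Parquet files making up the current version
df = dt.to_pyarrow_table().to_pandas()  # load the table without any Spark involvement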
tl;dr No
Delta Lake up to and including 0.8.0 is tightly integrated with Apache Spark, so it's impossible to have Delta Lake without Spark.

Read Azure Synapse table with Spark

I'm looking, with no success, for how to read an Azure Synapse table from Scala Spark. On https://learn.microsoft.com I found connectors for other Azure databases with Spark, but nothing for the new Azure Data Warehouse.
Does anyone know if it is possible?
It is now directly possible, with trivial effort (there is even a right-click option added in the UI for this), to read data from a DEDICATED SQL pool in Azure Synapse (the new Analytics workspace, not just the DWH) from Scala (and unfortunately, ONLY Scala right now).
Within the Synapse workspace (there is of course a write API as well):
val df = spark.read.sqlanalytics("<DBName>.<Schema>.<TableName>")
If you are outside the integrated notebook experience, you need to add the imports:
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._
It sounds like they are working on expanding to SERVERLESS SQL pool, as well as other SDKs (e.g. Python).
Read the top portion of this article as a reference: https://learn.microsoft.com/en-us/learn/modules/integrate-sql-apache-spark-pools-azure-synapse-analytics/5-transfer-data-between-sql-spark-pool
Maybe I misunderstood your question, but normally you would use a JDBC connection in Spark to use data from a remote database.
Check this doc:
https://docs.databricks.com/data/data-sources/azure/synapse-analytics.html
Keep in mind that Spark would have to ingest data from the Synapse tables into memory for processing and perform the transformations there, so it is not going to push operations down into Synapse.
Normally, you want to run a SQL query against the source database and only bring the results of that SQL into the Spark dataframe.
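As a sketch of that last point in PySpark, the JDBC query option sends the SQL to the database and only the result set lands in the dataframe; the server, credentials, and query below are placeholders, and the SQL Server JDBC driver is assumed to be available on the cluster.
jdbc_url = "jdbc:sqlserver://<server>.sql.azuresynapse.net:1433;database=<db>"
df = (spark.read
      .format("jdbc")
      .option("url", jdbc_url)
      .option("query", "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
      .option("user", "<user>")
      .option("password", "<password>")
      .load())
# Only the aggregated result set is transferred into Spark, not the full table.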

Can AWS Glue crawl Delta Lake table data?

According to the article by Databricks, it is possible to integrate Delta Lake with AWS Glue. However, I am not sure if it is possible to do it outside of the Databricks platform as well. Has someone done that? Also, is it possible to add Delta Lake related metadata using Glue crawlers?
This is not possible. Although you can crawl the S3 Delta files outside the Databricks platform, you won't find the data in the tables.
As per the doc, it says the below:
Warning
Do not use AWS Glue Crawler on the location to define the table in AWS Glue. Delta Lake maintains files corresponding to multiple versions of the table, and querying all the files crawled by Glue will generate incorrect results.
It is finally possible to use AWS Glue Crawlers to detect and catalog Delta Tables.
Here is a blog post explaining how to do it.
I am currently using a solution to generate manifests of Delta tables using Apache Spark (https://docs.delta.io/latest/presto-integration.html#language-python).
I generate a manifest file for each Delta Table using:
from delta.tables import DeltaTable

deltaTable = DeltaTable.forPath(spark, "<path-to-delta-table>")
deltaTable.generate("symlink_format_manifest")
Then I created the table using the example below. The DDL also creates the table inside the Glue Catalog; you can then access the data from AWS Glue using the Glue Data Catalog.
CREATE EXTERNAL TABLE mytable ([(col_name1 col_datatype1, ...)])
[PARTITIONED BY (col_name2 col_datatype2, ...)]
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '<path-to-delta-table>/_symlink_format_manifest/'  -- location of the generated manifest
It would be better if you could clarify what you mean by "integrate Delta Lake with AWS Glue".
At this moment there is no direct Glue API for Delta Lake support; however, you could write customized code using the Delta Lake library to save the output as a Delta Lake.
To use a Crawler to add the metadata of Delta Lakes to the Catalog, here is a workaround. The workaround is not pretty and has two major parts.
1) Get the manifest of files referenced by the Delta Lake. You could refer to the Delta Lake source code, play with the logs in _delta_log, or use a brute-force method such as:
import org.apache.spark.sql.functions.input_file_name

spark.read.format("delta")
  .load("<path-to-delta-lake>")
  .select(input_file_name())
  .distinct
2) Use the Scala or Python Glue API and the manifest to create or update the table in the Catalog.
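For what step 2 might look like mechanically, here is a boto3 sketch that registers an external table in the Glue Data Catalog; it points at a symlink manifest location in the spirit of the DDL in the earlier answer, and the database, table, columns, and S3 path are all placeholders.
import boto3

glue = boto3.client("glue")
glue.create_table(
    DatabaseName="analytics",
    TableInput={
        "Name": "mytable",
        "TableType": "EXTERNAL_TABLE",
        "StorageDescriptor": {
            "Columns": [{"Name": "col_name1", "Type": "string"}],
            "Location": "s3://my-bucket/my-delta-table/_symlink_format_manifest/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)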
AWS Glue Crawler allows us to update metadata from the Delta table transaction logs to the Glue metastore.
Ref - https://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html#crawler-delta-lake
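Configuring such a crawler with boto3 might look roughly like the sketch below; the crawler name, role ARN, database, and table path are placeholders, and the DeltaTargets options should be verified against the current Glue API before use.
import boto3

glue = boto3.client("glue")
glue.create_crawler(
    Name="delta-lake-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="analytics",
    Targets={
        "DeltaTargets": [
            {
                "DeltaTables": ["s3://my-bucket/my-delta-table/"],
                "WriteManifest": True,  # produces the symlink manifest table
            }
        ]
    },
)
glue.start_crawler(Name="delta-lake-crawler")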
But there are a few downsides to it:
It creates a symlink table in the Glue metastore.
This symlink-based approach wouldn't work well with multiple versions of the table, since the manifest file would point to the latest version.
There is no identifier in the Glue metadata to indicate that a given table is Delta, in case you have different types of tables in your metastore.
Any execution engine that accesses the Delta table via manifest files won't be able to use other auxiliary data in the transaction logs, such as column stats.
Yes, it is possible, but only recently.
See the AWS blog entry below for details on this just-announced capability.
https://aws.amazon.com/blogs/big-data/introducing-native-delta-lake-table-support-with-aws-glue-crawlers/

Big Data Analytics using Redshift vs Spark, Oozie Workflow Scheduler with Redshift Analytics

We want to do Big Data Analytics on our data stored in Amazon Redshift (currently in Terabytes, but will grow with time).
Currently, it seems that all our analytics can be done through Redshift queries (and hence no distributed processing might be required at our end), but we are not sure whether that will remain the case in the future.
In order to build a generic system that should be able to cater to our future needs as well, we are looking to use Apache Spark for data analytics.
I know that data can be read into Spark RDDs from HDFS, HBase and S3, but does it support data reading from Redshift directly?
If not, we can look to transfer our data to S3 and then read it in Spark RDDs.
My question is whether we should carry out our data analytics through Redshift queries directly, or go with the approach above and do analytics through Apache Spark (the problem here is that data locality optimization might not be available).
In case we do analytics through Redshift queries directly, can anyone please suggest a good Workflow Scheduler to write our Analytics jobs with. Our requirement is to be able to execute jobs as a DAG (Job2 should execute only if Job1 succeeds, etc) and be able to schedule our workflows through the proposed Workflow Engine.
Oozie seems like a good fit for our requirements but it turns out that Oozie cannot be used without Hadoop. Does it make sense to set up Hadoop on our machines and then use Oozie Workflow Scheduler to schedule our Data Analysis jobs through Redshift queries?
You cannot access data stored on Redshift nodes directly (e.g. via Spark), only via SQL queries submitted to the cluster as a whole.
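For illustration, a read with the spark-redshift connector issues the query on the Redshift cluster, stages the result in S3 via UNLOAD, and only then loads it into a dataframe; the connector format string, JDBC URL, IAM role, and bucket below are placeholders (the community fork of the connector may use a different package name).
df = (spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://<host>:5439/<db>?user=<user>&password=<password>")
      .option("query", "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id")
      .option("tempdir", "s3a://my-temp-bucket/redshift-staging/")  # staging area for UNLOAD
      .option("aws_iam_role", "arn:aws:iam::123456789012:role/RedshiftUnloadRole")
      .load())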
My suggestion would be to use Redshift as long as possible and only take on the complexity of Spark/Hadoop when you absolutely need it.
If, in the future, you move to Hadoop then Cascading Lingual gives you the option of running your existing Redshift analytics more or less unchanged.
Regarding workflow, Oozie is not a good fit for Redshift. I would suggest you look at Azkaban (true DAG) or Luigi (uses a Python DSL).
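To make the DAG requirement concrete, here is a minimal Luigi sketch of the Job1 -> Job2 dependency; the run() bodies are placeholders where the actual Redshift queries (e.g. via psycopg2 or JDBC) would go, and the local marker files stand in for real success outputs.
import luigi

class Job1(luigi.Task):
    def output(self):
        return luigi.LocalTarget("/tmp/job1.done")

    def run(self):
        # ... run the first Redshift query here ...
        with self.output().open("w") as f:
            f.write("ok")

class Job2(luigi.Task):
    def requires(self):
        return Job1()  # Job2 runs only if Job1 has succeeded

    def output(self):
        return luigi.LocalTarget("/tmp/job2.done")

    def run(self):
        # ... run the dependent Redshift query here ...
        with self.output().open("w") as f:
            f.write("ok")

if __name__ == "__main__":
    luigi.build([Job2()], local_scheduler=True)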
