I am trying to install the Apache Spark Connector for SQL Server and Azure SQL to use transactional data in big data analytics and persist results for ad-hoc queries or reporting. The connector allows you to use any SQL database, on-premises or in the cloud, as an input data source or output data sink for Spark jobs.
The SQL Spark connector is located here: https://github.com/microsoft/sql-spark-connector
Can someone let me know how to import it in Azure Synapse Apache Spark?
As per the conversation with the Synapse Product Group:
You don't need to add the Apache Spark connector JAR files or the com.microsoft.sqlserver.jdbc.spark package to your Synapse Spark pool. The connector is there out of the box for Spark 2.4; for Spark 3.1 it will most likely be in production in the upcoming weeks.
For more details, refer to the Microsoft Q&A thread which addresses a similar issue.
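For reference, here is a minimal PySpark sketch of writing a DataFrame to Azure SQL with that connector; the server, database, table, and credential values are placeholders, and df is assumed to be an existing DataFrame:

# Minimal sketch: write a Spark DataFrame to Azure SQL using the
# com.microsoft.sqlserver.jdbc.spark format; all values are placeholders.
server_name = "jdbc:sqlserver://<your-server>.database.windows.net"
database_name = "<your-database>"
url = server_name + ";databaseName=" + database_name

df.write \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("overwrite") \
    .option("url", url) \
    .option("dbtable", "<schema>.<table>") \
    .option("user", "<username>") \
    .option("password", "<password>") \
    .save()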
Related
I have no experience with Azure Synapse but my understanding is that it is the same as Databricks, ADF, ADLS2 and Hive in SQL DWH, all together in one workspace with a different name.
Am I wrong?
Yes, in many contexts Azure Synapse and Databricks provide the same Big Data Analytics approach, but there are also a few differences between these services.
With the new functionalities in Synapse, we now see some similar functionalities as in Databricks (e.g. Spark, Delta), which raises the question of how Synapse compares to Databricks and when to use which.
Yes, both have Spark but…
Databricks
has a proprietary data processing engine (Databricks Runtime) built on a highly optimized version of Apache Spark offering 50x performance
already has support for Spark 3.0
allows users to opt for GPU enabled clusters and choose between standard and high-concurrency cluster mode
Synapse
Open-source Apache Spark (thus not including all features of Databricks Runtime)
has built-in support for .NET for Spark applications
Yes, both have notebooks
Synapse
Nteract Notebooks
has co-authoring of Notebooks, but one person needs to save the Notebook before another person sees the change
doesn’t have automated versioning
Databricks
Databricks Notebooks
Has real-time co-authoring (both authors see the changes in real time)
Has automated versioning
Yes, both can access data from a data lake
Synapse
When creating Synapse, you can select a data lake which will be your primary data lake (you can query it directly from the scripts and notebooks)
Databricks
You need to mount a data lake before using it (see the sketch after this comparison)
Yes, both leverage Delta
Synapse
Delta Lake is open source
Databricks
Has Databricks Delta, which is built on the open-source Delta Lake but offers some extra optimizations
No, they are not the same
Synapse
Has both a traditional SQL engine (to fit the traditional BI developers) as well as a Spark engine (to fit data scientists, analysts & engineers)
Is a data warehouse (i.e. Synapse Analytics) + an interface tool (i.e. Synapse Studio)
Databricks
Is not a data warehouse tool but rather a Spark-based notebook tool
Has a focus on Spark, Delta Engine, MLflow and MLR
No, they don’t offer the same developer experience
Synapse
Offers a developer experience for Spark development currently only through Synapse Studio (not through local IDEs)
Doesn't yet have Git integrated within the Synapse Studio Notebooks
Databricks
Offers a developer experience within Databricks UI, Databricks Connect (i.e. remote connect from Visual Studio Code, Pycharm, etc.) and soon Jupyter & RStudio UI within Databricks
Check When to use Synapse and when Databricks?.
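As noted above, Databricks needs the data lake mounted before use; here is a minimal sketch of mounting an ADLS Gen2 container with a service principal, where every account, container, and credential value is a placeholder:

# Minimal sketch: mount an ADLS Gen2 container in Databricks using OAuth
# (service principal); all values below are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": "<client-secret>",
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/datalake",
    extra_configs=configs,
)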
If I read a file from ADLS into a PySpark data frame and write it back to another ADLS folder in a different file format, will that lineage be captured in the Hive metastore? Can lineage be shown for this kind of operation?
Currently this lineage won't show up out of the box; however, Purview uses Atlas behind the scenes, so you can probably capture this lineage using the API.
Here's an example of where Spline was used to track lineage from notebooks:
https://intellishore.dk/data-lineage-from-databricks-to-azure-purview/
This article talks about how to get started with the Purview REST API:
https://techcommunity.microsoft.com/t5/azure-architecture-blog/exploring-purview-s-rest-api-with-python/ba-p/2208058
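As a rough Python sketch of what such a call looks like, using azure-identity for the token; the account name, entity GUID, and endpoint path below are placeholders to adapt from the article:

# Rough sketch: call the Purview (Atlas v2) catalog REST API with an Azure AD token.
# The account name and entity GUID are placeholders; see the article above for
# the entity payloads needed to register lineage.
import requests
from azure.identity import DefaultAzureCredential

account_name = "<purview-account>"
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

endpoint = f"https://{account_name}.purview.azure.com/catalog/api/atlas/v2"
response = requests.get(
    f"{endpoint}/entity/guid/<entity-guid>",
    headers={"Authorization": f"Bearer {token}"},
)
print(response.json())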
You can use the OpenLineage-based Databricks to Purview Solution Accelerator to ingest the lineage provided by Databricks. By deploying the solution accelerator, you'll have a set of Azure Functions and a Databricks cluster that can extract the logical plan from a Databricks notebook / job and transform it automatically to Apache Atlas / Microsoft Purview entities.
Supports table level lineage from Spark Notebooks and jobs for the following data sources:
Azure SQL
Azure Synapse Analytics
Azure Data Lake Gen 2
Azure Blob Storage
Delta Lake
Supports Spark 3.1 and 3.0 (Interactive and Job clusters) / Spark 2.x (Job clusters)
Databricks Runtimes between 6.4 and 10.3 are currently supported
Can be configured per cluster or for all clusters as a global configuration (see the sketch after this list)
Once configured, does not require any code changes to notebooks or jobs
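As a rough illustration, the cluster configuration is a handful of Spark properties that enable the OpenLineage listener; the exact values (the Azure Function host, namespace, API key, and any init script) come from the solution accelerator's deployment guide, so everything below is a placeholder:

# Illustration only: generic OpenLineage listener settings in the cluster Spark config;
# the accelerator's deployment guide lists the exact properties and values to use.
spark.extraListeners io.openlineage.spark.agent.OpenLineageSparkListener
spark.openlineage.host https://<function-app-name>.azurewebsites.net
spark.openlineage.namespace <namespace>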
I am new to Spark Structured Streaming and its concepts. I was reading through the documentation for Azure HDInsight cluster here, and it's mentioned that structured streaming applications run on HDInsight clusters and connect to streaming data from .. Azure Storage, or Azure Data Lake Storage. I was looking at how to get started with streaming that listens to new file created events from the storage or ADLS. The Spark documentation does provide an example, but I am looking for how to tie up streaming with the blob/file creation event, so that I can store the file content in a queue from my Spark job. It would be great if anyone can help me out on this.
Happy to help you on this, but can you be more precise with the requirement? Yes, you can run Spark Structured Streaming jobs on Azure HDInsight. Basically, mount the Azure Blob storage to the cluster and then you can directly read the data available in the blob.
// Batch read of the JSON files available at the mounted blob path
val df = spark.read.option("multiLine", true).json("PATH OF BLOB")
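To react to newly created files rather than do a one-off batch read, one option is Spark's file streaming source pointed at the same blob/ADLS path; note that the file source polls the directory for new files rather than subscribing to blob-created events. A Python sketch, assuming an existing spark session and placeholder schema and paths:

# Sketch: continuously pick up new JSON files landing under a blob/ADLS path
# with Spark's file streaming source; schema and paths are placeholders.
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField("id", StringType()),
    StructField("payload", StringType()),
])

stream_df = (
    spark.readStream
    .schema(schema)  # file streaming sources require an explicit schema
    .json("wasbs://<container>@<account>.blob.core.windows.net/<input-path>")
)

query = (
    stream_df.writeStream
    .format("parquet")  # or use foreachBatch to forward each micro-batch to a queue
    .option("path", "/output/<path>")
    .option("checkpointLocation", "/checkpoints/<path>")
    .start()
)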
Azure Data Lake Gen2 (ADL2) support has been released for Hadoop 3.2 only. Open-source Spark 2.4.x supports Hadoop 2.7 and, if you compile it yourself, Hadoop 3.1. Spark 3 will support Hadoop 3.2, but it's not released yet (only a preview release).
Databricks offers support for ADL2 natively.
My solution to tackle this problem was to manually patch and compile Spark 2.4.4 with Hadoop 3.2 to be able to use the ADL2 libs from Microsoft.
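For completeness, once you have a runtime that ships the ABFS driver (Hadoop 3.2, or a Databricks runtime), ADLS Gen2 access is configured through the hadoop-azure abfss settings; a sketch with placeholder service-principal values, assuming an existing spark session:

# Sketch: configure OAuth access to ADLS Gen2 (abfss) on an existing SparkSession;
# storage account, tenant, and credential values are placeholders.
account = "<storage-account>"
prefix = "fs.azure.account"
spark.conf.set(f"{prefix}.auth.type.{account}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"{prefix}.oauth.provider.type.{account}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"{prefix}.oauth2.client.id.{account}.dfs.core.windows.net", "<application-id>")
spark.conf.set(f"{prefix}.oauth2.client.secret.{account}.dfs.core.windows.net", "<client-secret>")
spark.conf.set(
    f"{prefix}.oauth2.client.endpoint.{account}.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)

df = spark.read.parquet(f"abfss://<container>@{account}.dfs.core.windows.net/<path>")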
I am trying to read data from Databricks Delta Lake via Apache Superset. I can connect to Delta Lake with a JDBC connection string supplied by the cluster, but Superset seems to require a SQLAlchemy string, so I'm not sure what I need to do to get this working. Thank you, anything helps.
superset database setup
Have you tried this?
https://flynn.gg/blog/databricks-sqlalchemy-dialect/
Thanks to contributions by Evan Thomas, the Python databricks-dbapi package now supports using Databricks as a SQL dialect within SQLAlchemy. This is particularly useful for hooking up Databricks to a dashboard frontend application like Apache Superset. It provides compatibility with both standard Databricks and Azure Databricks.
Just use pyhive and you should be ready to connect to the Databricks Thrift JDBC server.
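As a rough example, the SQLAlchemy URI you paste into Superset's database form looks something like the following; the dialect name comes from databricks-dbapi, and the host, token, and any extra connection parameters (cluster name or HTTP path for Azure Databricks) are placeholders to take from the post above:

# Rough example of a Superset SQLAlchemy URI using the databricks-dbapi dialect;
# host, token, and any trailing parameters are placeholders.
databricks+pyhive://token:<personal-access-token>@<workspace-host>:443/default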
We are working on an Azure ML and ADLS combination. Since the HDInsight cluster is working over ADLS, we are trying to use the Hive query and HDFS route and are running into problems.
Requesting your help in solving the problem of reading data from a Hive query and writing to HDFS. Below is the error URL for reference:
https://studioapi.azureml.net/api/sharedaccess?workspaceId=025ba20578874d7086e6c495cc49a3f2&signature=ZMUCNMwRjlrksrrmsrx5SaGedSgwMmO%2FfSHvq190%2F1I%3D&sharedAccessUri=https%3A%2F%2Fesprodussouth001.blob.core.windows.net%2Fexperimentoutput%2Fccf9a206-730d-4773-b44e-a2dd8c6e87b9%2Fccf9a206-730d-4773-b44e-a2dd8c6e87b9.txt%3Fsv%3D2015-02-21%26sr%3Db%26sig%3DHkuFm8B2Ba1kEWWIwanqlv%2FcQPWVz0XYveSsZnEa0Wg%3D%26st%3D2017-10-16T18%3A31%3A06Z%26se%3D2017-10-17T18%3A36%3A06Z%26sp%3Dr
Azure Machine Learning supports Hive but not over ADLS.