Zeppelin/Jupyter Notebook for Kudu - apache-kudu

We are trying to connect a Zeppelin notebook to Kudu via Impala.
We didn't find any existing Kudu interpreters; in addition, we tried to find an Impala interpreter, without luck.
Any help would be appreciated.
Rony
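A common workaround, since there is no dedicated Kudu interpreter, is to go through Impala: in Zeppelin, the generic JDBC interpreter can be pointed at the Impala JDBC driver, and from Jupyter the impyla client works. A minimal impyla sketch (the hostname and table name below are placeholders; 21050 is impalad's default HiveServer2-protocol port):

# pip install impyla
from impala.dbapi import connect

conn = connect(host="impalad.example.com", port=21050)  # placeholder host
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_kudu_table LIMIT 10")  # a Kudu-backed Impala table
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()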

Related

Apache Spark Connector - where to install on Databricks

The "Apache Spark connector: SQL Server & Azure SQL" article from the Azure team describes how to use this connector.
Question: If you want to use the above connector in Azure Databricks, where do you install it?
Remarks: The above article tells you to install it from here and import it in, say, your notebook using com.microsoft.azure:spark-mssql-connector_2.12:1.2.0. But it does not tell you where to install it. I'm probably not understanding the article correctly. I need to use it in Azure Databricks and would like to know where to install the connector jar (compiled) file.
You can do this in the cluster setup. See this documentation: https://databricks.com/blog/2015/07/28/using-3rd-party-libraries-in-databricks-apache-spark-packages-and-maven-libraries.html
In short, when setting up the cluster, you can add third party libraries by their Maven coordinates - "com.microsoft.azure:spark-mssql-connector_2.12:1.2.0" is an example of a Maven coordinate.
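Once the library is attached, using the connector from a notebook is just a matter of naming its data source format. A minimal write sketch (the server, database, table, and credentials are hypothetical placeholders):

# Assumes com.microsoft.azure:spark-mssql-connector_2.12:1.2.0 is attached to the cluster.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
(df.write
   .format("com.microsoft.sqlserver.jdbc.spark")  # the connector's data source name
   .mode("overwrite")
   .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;databaseName=mydb")  # placeholder
   .option("dbtable", "dbo.people")      # placeholder table
   .option("user", "my_user")            # placeholder credentials
   .option("password", "my_password")
   .save())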

What happens exactly when setting spark.databricks.service.server.enabled to true on Databricks?

Can anyone explain what spark.databricks.service.server.enabled does exactly when it's set to true?
The only thing I can find in the documentation is that it should be set to true when using Databricks Runtime 5.3 or below, but I can't find an explanation of what's happening exactly under the hood (https://docs.databricks.com/dev-tools/databricks-connect.html)
I would be grateful for any helpful response.
Thanks,
Note: spark.databricks.service.server.enabled true lets you work on a Databricks cluster from a remote machine.
Setting "spark.databricks.service.server.enabled true" as part of the cluster setup enables Databricks Connect, which allows you to connect your favorite IDE (IntelliJ, Eclipse, PyCharm, RStudio, Visual Studio), notebook server (Zeppelin, Jupyter), and other custom applications to Azure Databricks clusters and run Apache Spark code.
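A rough sketch of the workflow (the workspace URL, token, and cluster ID you supply to databricks-connect configure are your own; the code below is a minimal example under those assumptions, not Databricks documentation):

# On the cluster (Databricks Runtime 5.3 and below), add to the Spark config:
#   spark.databricks.service.server.enabled true
#
# On the local machine:
#   pip install databricks-connect
#   databricks-connect configure   # prompts for workspace URL, token, cluster ID
#
# After that, a plain local SparkSession is transparently backed by the remote cluster:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.range(100).count())  # this job runs on the Databricks cluster, not locally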

static analysis for Databricks notebooks

We are using Databricks notebooks with GitHub as version control. I am trying to do static analysis either as part of GitHub checks or as part of the Azure pipeline.
I am aware of linters available for Python and Scala. However, the challenge here is that one notebook could contain Python, Scala, and SQL code as well.
Using a single linter might not work, so is there any way I could achieve this?
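One possible angle, sketched under the assumption that the notebooks are synced to GitHub in Databricks' .py source format (cells separated by # COMMAND ----------, non-Python cells prefixed with # MAGIC %sql or # MAGIC %scala): split each file by cell language in the pipeline and feed each bucket to its own linter, e.g. flake8 for Python, sqlfluff for SQL, scalastyle for Scala.

import re
import sys

CELL_SEP = "# COMMAND ----------"

def split_cells_by_language(source):
    """Bucket the cells of a Databricks .py notebook export by language."""
    buckets = {"python": [], "sql": [], "scala": []}
    for cell in source.split(CELL_SEP):
        match = re.search(r"^# MAGIC %(\w+)", cell, re.MULTILINE)
        lang = match.group(1) if match else "python"
        if lang not in buckets:
            lang = "python"  # %md, %run, etc. are comment-only lines in the export
        if lang == "python":
            buckets["python"].append(cell)
        else:
            # strip the "# MAGIC " prefix to recover the raw SQL/Scala source
            code = "\n".join(
                line[len("# MAGIC "):] if line.startswith("# MAGIC ") else line
                for line in cell.splitlines()
            )
            buckets[lang].append(code)
    return buckets

if __name__ == "__main__":
    buckets = split_cells_by_language(open(sys.argv[1]).read())
    for lang, cells in buckets.items():
        print(f"{lang}: {len(cells)} cell(s)")  # hand each bucket to its linter here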

How can I read an XML file in Azure Databricks Spark

I was looking for some info on the MSDN forums but couldn't find a good forum. While reading on the Spark site I got the hint that I would have better chances here.
So, bottom line: I want to read from Blob storage where there is a continuous feed of XML files, all small files; finally we store these files in an Azure DW.
Using Azure Databricks I can use Spark and Python, but I can't find a way to 'read' the XML type. Some sample scripts used the library xml.etree.ElementTree but I can't get it imported.
So any help pushing me in a good direction is appreciated.
One way is to use the Databricks spark-xml library:
Import the spark-xml library into your workspace
https://docs.databricks.com/user-guide/libraries.html#create-a-library (search spark-xml in the maven/spark package section and import it)
Attach the library to your cluster https://docs.databricks.com/user-guide/libraries.html#attach-a-library-to-a-cluster
Use the following code in your notebook to read the XML file, where "note" is the row tag of my XML file (spark-xml selects records with rowTag when reading; rootTag applies when writing).
xmldata = spark.read.format('xml').option("rowTag", "note").load('dbfs:/mnt/mydatafolder/xmls/note.xml')
I found this one really helpful as an example:
https://github.com/raveendratal/PysparkTelugu/blob/master/Read_Write_XML_File.ipynb
He has a YouTube video to walk through the steps as well.
In summary, there are 2 approaches:
install it in your Databricks cluster on the 'Libraries' tab, or
install it via launching spark-shell in the notebook itself.
I got one solution for reading an XML file in Databricks:
Install this library: com.databricks:spark-xml_2.12:0.11.0
using the 10.5 (includes Apache Spark 3.2.1, Scala 2.12) cluster configuration.
Using the %fs head "" command (with the file's path inside the quotes) you will get the rootTag and rowTag.
df = spark.read.format('xml').option("rootTag","orders").option("rowTag","purchase_item").load("dbfs:/databricks-datasets/retail-org/purchase_orders/purchase_orders.xml")
display(df)
[reference image for the solution to read the XML file in Databricks]

HDInsight and Talend Open Studio for Big Data

I am currently working on a project in which I need to connect Talend Open Studio for Big Data (v6.3.1) to an Azure HDInsight (3.5) Hadoop cluster. So far, I am trying a simple example which consists of creating a Hive table.
For that, I am using the following diagram:
The Hive connection was configured as follows:
… and please find below the specifications of the tHiveCreateTable_1 node:
By running this process:
· The specified container and deployment blob are created (see image below) - which makes me believe that everything is OK with the Windows Storage configuration
· However, the tHiveCreateTable_1 node has an error (see image below)
· I strongly believe that it's something related to the Hostname and Port;
· I tried to use the hostname of the cluster and the hostname of the Hive server that we can find in Ambari (see image below)
· But none of them worked as expected.
Has anyone tried something similar to this?
Note: It seems reasonably important to say that the Azure version supported by Talend is 3.4; however, I am using 3.5, so that might be the issue.
Many thanks for your help in advance.
According to the official document about the differences between Hadoop components and versions available with HDInsight, HDInsight 3.5 is based on Hortonworks Data Platform (HDP) 2.5, while HDI 3.4 is based on HDP 2.4. However, there is no big version difference in their Hive component or other components. So my suggestion is that you try to create an HDI 3.4 cluster using the same Azure Storage account as your current HDI 3.5; this should serve your needs without further changes.
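On the hostname and port specifically (an assumption about this setup, not something verified against Talend 6.3.1): HDInsight does not expose HiveServer2's raw port 10000 publicly; Hive is reached over HTTPS on port 443 through the cluster's public endpoint, with a JDBC URL along the lines of jdbc:hive2://<your-cluster>.azurehdinsight.net:443/default;ssl=true;transportMode=http;httpPath=/hive2 (the cluster name is a placeholder). So pointing Talend at the Ambari hostname with the default Hive port from outside the cluster would fail regardless of the HDI version.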
