Is there a way to execute a SQL query on Databricks SQL Warehouse using Rest API?
I can see in the documentation that there are APIs to create a query, but don't see any API to run a query.
Why do you want to use the REST API? Spark has always had a JDBC endpoint. Just send a valid Spark SQL query against the built-in Hive tables. See the documentation here.
Just query the interactive Spark cluster that you leave up. I have not used the new SQL Data Warehouse version of Databricks, but I am sure there is something similar.
Right now (November 2022) there is no public REST API to run queries on the SQL Warehouse, but it's on the roadmap.
In the meantime you can write a small wrapper around JDBC/ODBC, or use the connectors for Python/Go/Node.js, to expose your own REST API.
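For example, with the Python connector (databricks-sql-connector) such a wrapper is only a few lines. A minimal sketch, assuming placeholder values for the hostname, HTTP path, and access token:

```python
# Minimal sketch: running a query against a Databricks SQL Warehouse from Python
# using the databricks-sql-connector package (pip install databricks-sql-connector).
# The hostname, HTTP path, and token below are placeholders for your workspace values.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",               # placeholder
    access_token="dapiXXXXXXXXXXXXXXXX",                            # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
        for row in cursor.fetchall():
            print(row)
```

You could wrap this in a small Flask/FastAPI handler to get the REST interface the question asks about.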
Related
What is the data_source_id parameter in SQL queries run on Azure Databricks?
The Databricks AWS documentation says it is the ID of the data source where the query will run, but if I change the notebook and run the query, the data_source_id is still the same.
data_source_id is another identifier of the SQL endpoint. It's not the same as the SQL endpoint ID that is used for operations on the SQL endpoint. Unfortunately that information isn't returned by the SQL Endpoint Get API; instead it has to be looked up via another API that isn't officially documented yet (documentation is coming very soon), but you can look at the Databricks Terraform provider source code to see how to use it.
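For illustration, a hedged sketch of that lookup. The /api/2.0/preview/sql/data_sources path and the response shape are assumptions taken from the (currently undocumented) API used by the Databricks Terraform provider, so verify them against the provider source before relying on this:

```python
# Sketch: listing SQL data sources to map data_source_id <-> SQL endpoint.
# The endpoint path and response fields are assumptions; check the Terraform
# provider source code for the current behaviour.
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
token = "dapiXXXXXXXXXXXXXXXX"                                 # placeholder personal access token

resp = requests.get(
    f"{host}/api/2.0/preview/sql/data_sources",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

data_sources = resp.json()
# expected: a list of entries carrying the data_source_id ("id") plus a reference
# to the SQL endpoint it belongs to
print(data_sources)
```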
I am trying to see whether it is possible to access some data stored within a table in a dedicated SQL pool in Azure Synapse using the REST API, but I have not been able to figure much out. I checked the official Microsoft docs, and at most I have been able to query for a specific column within a table, not much beyond that. I am wondering whether it is even possible to get data through the Azure Synapse REST API. I would appreciate any help.
Docs for reference: https://learn.microsoft.com/en-us/rest/api/synapse/
It is not possible to access the data in Synapse dedicated SQL pools using REST APIs; today the REST API can only be used to manage compute (for example pausing, resuming, or scaling a pool).
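To make the distinction concrete, here is a hedged sketch of the kind of compute-management call that is available over REST, pausing a dedicated SQL pool via the ARM endpoint. The api-version and exact resource path should be checked against the Synapse REST reference linked above:

```python
# Sketch: pause a Synapse dedicated SQL pool through the management REST API.
# Resource path and api-version are assumptions based on the Microsoft.Synapse
# sqlPools operations; data-plane queries against the pool are NOT possible this way.
import requests

subscription = "<subscription-id>"
resource_group = "<resource-group>"
workspace = "<workspace-name>"
sql_pool = "<dedicated-sql-pool-name>"
token = "<azure-ad-bearer-token>"  # e.g. obtained via `az account get-access-token`

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Synapse/workspaces/{workspace}"
    f"/sqlPools/{sql_pool}/pause?api-version=2021-06-01"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)  # 202 Accepted means the pause operation was started
```

For reading table data itself you still need a SQL connection (ODBC/JDBC), not the REST API.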
I can see a SQL tab on the Spark history server which has all the **query execution plans** in the details section, but there are no REST APIs mentioned on the Spark monitoring and instrumentation page to get them as JSON objects from code.
Link: https://spark.apache.org/docs/latest/monitoring.html
For example, I can easily get job and stage data as JSON using the http://localhost:18080/applications/[app-id]/jobs and http://localhost:18080/applications/[app-id]/stages REST APIs.
How can I get the execution plans as JSON objects using the REST APIs?
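For reference, a small sketch of the calls described in the question, pulling jobs and stages as JSON from the history server. The app-id is a placeholder; per the monitoring docs the REST endpoints are served under /api/v1:

```python
# Sketch: fetch job and stage data as JSON from the Spark history server REST API.
# Adjust host/port and app-id for your environment.
import requests

base = "http://localhost:18080/api/v1"      # history server REST base URL
app_id = "application_1660000000000_0001"   # placeholder app-id

jobs = requests.get(f"{base}/applications/{app_id}/jobs").json()
stages = requests.get(f"{base}/applications/{app_id}/stages").json()

print(len(jobs), "jobs,", len(stages), "stages")
```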
I am trying to read data from Databricks Delta Lake via Apache Superset. I can connect to Delta Lake with a JDBC connection string supplied by the cluster, but Superset seems to require a SQLAlchemy URI, so I'm not sure what I need to do to get this working. Thank you, anything helps.
[Screenshot: Superset database setup]
Have you tried this?
https://flynn.gg/blog/databricks-sqlalchemy-dialect/
Thanks to contributions by Evan Thomas, the Python databricks-dbapi package now supports using Databricks as a SQL dialect within SQLAlchemy. This is particularly useful for hooking up Databricks to a dashboard frontend application like Apache Superset. It provides compatibility with both standard Databricks and Azure Databricks.
Just use PyHive and you should be ready to connect to the Databricks Thrift JDBC server.
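Putting the two answers together, a hedged sketch of the SQLAlchemy URI Superset needs, built on the databricks-dbapi package from the blog post (which uses PyHive under the hood). The "databricks+pyhive" dialect name and the connect_args key are taken from that package's docs from memory, so double-check them against the current README; the host, token, and cluster name are placeholders:

```python
# Sketch: SQLAlchemy engine for Databricks via databricks-dbapi (pip install databricks-dbapi).
# The same URI string goes into Superset's "SQLALCHEMY URI" field in the database setup form.
from sqlalchemy import create_engine, text

engine = create_engine(
    "databricks+pyhive://token:dapiXXXXXXXXXXXXXXXX@"   # placeholder personal access token
    "mycompany.cloud.databricks.com:443/default",        # placeholder workspace host / database
    connect_args={"cluster": "my-interactive-cluster"},  # placeholder interactive cluster name
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).fetchall())
```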
In the Azure environment, I have an Azure SQL DB and a Cosmos DB Graph. Using Azure Data Factory, I need to insert/update data from the SQL DB into the graph DB.
My thinking is that I first need to transform the data to JSON and from there insert it into the graph DB.
Is this the way to go? Are there any other ways?
Thank you.
1. Based on the ADF copy activity connectors and the thread "How can we create Azure's Data Factory pipeline with Cosmos DB (with Graph API) as data sink?" mentioned by @silent, the Cosmos DB Graph API connector is not supported in ADF so far. You could vote up this feature in this feedback link, which was last updated on April 12, 2019.
2. The Cosmos DB data migration tool isn't a supported import tool for Gremlin API accounts at this time. Please see this link: https://learn.microsoft.com/en-us/azure/cosmos-db/import-data
3. You could take a look at the graph bulk executor .NET library. This is the sample application: git clone https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started.git