What is the data_source_id in the parameters of SQL scripts run on Azure Databricks?
The Databricks AWS documentation says it is the ID of the data source where the query will run, but when I run the query from a different notebook, the data_source_id is still the same.
data_source_id is another identifier of the SQL Endpoint. It's not the same as the SQL endpoint ID that is used for operations on the SQL Endpoint. Unfortunately, that information isn't returned by the SQL Endpoint Get API; instead it must be fetched via another API that isn't officially documented yet (documentation is coming very soon). You can look at the Databricks Terraform provider source code to see how to use it.
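As a sketch of what the Terraform provider does, the lookup can be reproduced with a plain HTTP call. This assumes the undocumented endpoint `GET /api/2.0/preview/sql/data_sources` returns a JSON list of objects carrying both `id` (the data_source_id) and `endpoint_id` (the SQL Endpoint ID); the host and token are placeholders.

```python
# Hypothetical sketch: resolve a SQL Endpoint ID to its data_source_id via the
# undocumented data_sources listing endpoint. Field names (`id`, `endpoint_id`)
# are assumptions taken from the Terraform provider's usage.
import json
import urllib.request
from typing import Optional

def find_data_source_id(sources: list, endpoint_id: str) -> Optional[str]:
    """Map a SQL Endpoint ID to its data_source_id in the listing."""
    for src in sources:
        if src.get("endpoint_id") == endpoint_id:
            return src.get("id")
    return None

def list_data_sources(host: str, token: str) -> list:
    # host e.g. "adb-1234567890123456.7.azuredatabricks.net" (placeholder)
    req = urllib.request.Request(
        f"https://{host}/api/2.0/preview/sql/data_sources",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```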
Related
Is there a way to execute a SQL query on a Databricks SQL Warehouse using the REST API?
I can see in the documentation that there are APIs to create a query, but I don't see any API to run a query.
Why do you want to use the REST API? Spark has always had a JDBC endpoint. Just send a valid Spark SQL query against the Hive tables you have built; see the documentation here.
Just query the interactive Spark cluster that you leave up. I have not used the new SQL Warehouse version of Databricks, but I am sure there is something similar.
Right now (November 2022) there is no public REST API to run a query on a SQL Warehouse, but it's on the roadmap.
However, you can write a small wrapper around JDBC/ODBC, or around the connectors for Python/Go/Node.js, to implement a REST API yourself.
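A minimal sketch of such a wrapper, using the official `databricks-sql-connector` package: it runs a statement against the warehouse and shapes the rows as a JSON-serializable payload that a REST endpoint could return. The hostname, HTTP path, and token below are placeholders, not real values.

```python
# Sketch: run a statement on a Databricks SQL Warehouse via the
# databricks-sql-connector package (pip install databricks-sql-connector)
# and return the rows as a list of dicts, ready to serve as JSON.
from typing import Any

def rows_to_payload(columns: list, rows: list) -> list:
    """Shape cursor results as JSON-serializable records."""
    return [dict(zip(columns, row)) for row in rows]

def run_query(statement: str) -> list:
    from databricks import sql
    with sql.connect(
        server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
        http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
        access_token="dapi-...",                                       # placeholder
    ) as conn:
        with conn.cursor() as cursor:
            cursor.execute(statement)
            columns = [desc[0] for desc in cursor.description]
            return rows_to_payload(columns, cursor.fetchall())
```

Exposing `run_query` behind a small HTTP handler then gives you the missing REST surface.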
Is there an equivalent of 'sp_refreshview' available in Azure Synapse Dedicated SQL Pool?
When I tried it, it returned an error.
Since views won't get updated automatically, is there any other command or system stored procedure available in Azure Synapse Analytics's Dedicated SQL Pool besides the 'ALTER VIEW' approach?
Try this:
EXEC sys.sp_refreshsqlmodule 'dbo.YourViewName';
See the documentation here.
I am trying to see whether it is possible to access data stored within a table in a dedicated SQL pool in Azure Synapse using the REST API, but I have not been able to figure much out. I checked the official docs at Microsoft, and at most I have been able to query for a specific column within a table, not much beyond that. I am wondering whether it is even possible to get data through the Azure Synapse REST API. I would appreciate any help.
Docs for reference: https://learn.microsoft.com/en-us/rest/api/synapse/
It is not possible to access the data in Synapse Dedicated SQL pools using REST APIs; today the REST API can only be used to manage compute.
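To illustrate the compute-management side, pausing a dedicated SQL pool is done with a POST against the ARM endpoint. The subscription, resource group, workspace, and pool names below are placeholders, and a real call also needs an Azure AD bearer token.

```python
# Sketch: build the ARM URL for pausing a Synapse dedicated SQL pool
# (POST .../providers/Microsoft.Synapse/workspaces/{ws}/sqlPools/{pool}/pause).
def pause_sql_pool_url(subscription: str, resource_group: str,
                       workspace: str, pool: str,
                       api_version: str = "2021-06-01") -> str:
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Synapse/workspaces/{workspace}"
        f"/sqlPools/{pool}/pause?api-version={api_version}"
    )

# Issuing the request (token acquisition omitted):
# import urllib.request
# req = urllib.request.Request(
#     pause_sql_pool_url("<sub>", "<rg>", "<ws>", "<pool>"),
#     method="POST", headers={"Authorization": "Bearer <token>"})
# urllib.request.urlopen(req)
```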
I am trying to find an API or service to fetch the metadata of tables in an Azure SQL database. However, I can't find anything; I have only found an API that gets metadata about the database itself.
There are no Azure ARM APIs for reading from or writing to a database.
To read the metadata you must connect to the database with a SQL Server client and issue metadata queries, like
select *
from sys.tables
etc. You can easily do this with PowerShell, SQLCMD, or mssql-cli.
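The same metadata queries can be issued programmatically; here is a sketch using pyodbc against the standard catalog views (`sys.tables`, `sys.columns`, etc.). The connection string is a placeholder.

```python
# Sketch: pull table/column metadata from Azure SQL via the catalog views.
# Requires pyodbc (pip install pyodbc) and a real connection string.
METADATA_QUERY = """
SELECT s.name AS schema_name, t.name AS table_name,
       c.name AS column_name, ty.name AS type_name, c.max_length
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
ORDER BY s.name, t.name, c.column_id
"""

def fetch_metadata(conn_str: str) -> list:
    import pyodbc
    # conn_str e.g. "Driver={ODBC Driver 18 for SQL Server};Server=...;..." (placeholder)
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(METADATA_QUERY).fetchall()
```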
Is it possible to connect ADF to an Oracle database on AWS as a source and migrate data to an Azure SQL Server?
I've made some attempts and the result was always a timeout.
The goal was achieved using an Integration Runtime, but I didn't want to use it; I'd like a direct connection.
Hi, Vinicius. Based on the Oracle connector documentation, there are no special properties you need to configure beyond host, user, password, etc. for an Oracle database on AWS. You could check the supported versions of Oracle databases.
If you still have the timeout issue, you could submit feedback to Azure Data Factory to get an official statement.
Since you want to avoid the IR, you could consider the solutions below:
1. Try exporting the data to AWS S3; ADF supports the Amazon S3 connector as a source.
2. Try using the Data Integration (Kettle) tool to transfer the data via a JDBC driver.
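The first option can be sketched in a few lines: dump an Oracle query to CSV and upload it to S3, from where ADF's Amazon S3 connector can pick it up. The connection details, bucket, and key are placeholders; this assumes the `python-oracledb` and `boto3` packages are installed and credentials are configured.

```python
# Sketch: export an Oracle query result to CSV and upload it to S3.
import csv
import io

def rows_to_csv(columns: list, rows: list) -> str:
    """Render query results as CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    writer.writerows(rows)
    return buf.getvalue()

def export_to_s3(query: str, bucket: str, key: str) -> None:
    import boto3
    import oracledb  # pip install python-oracledb
    # user/password/dsn are placeholders for the AWS-hosted Oracle instance
    with oracledb.connect(user="app", password="***", dsn="awshost/ORCLPDB") as conn:
        cur = conn.cursor()
        cur.execute(query)
        body = rows_to_csv([d[0] for d in cur.description], cur.fetchall())
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode())
```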