Hi, is it possible to connect an Azure Database for PostgreSQL to Power BI using DirectQuery? I can't seem to find any information about this.
Currently these are the only data sources supported by DirectQuery:
Amazon Redshift
Azure HDInsight Spark (Beta)
Azure SQL Database
Azure SQL Data Warehouse
Google BigQuery (Beta)
IBM DB2 database
IBM Netezza (Beta)
Impala (version 2.x)
Oracle Database (version 12 and above)
SAP Business Warehouse Application Server
SAP Business Warehouse Message Server (Beta)
SAP HANA
Snowflake
Spark (Beta) (version 0.9 and above)
SQL Server
Teradata Database
Vertica (Beta)
PostgreSQL is supported, but only in import mode. So no, you can't use DirectQuery with PostgreSQL (unless you write your own custom connector). You can vote for this idea though.
I'm working on a custom connector that enables DirectQuery from PostgreSQL through an ODBC driver. I'm working on a full write-up (this month, when I get time), but until then I can share the repo here:
DirectQuery for Postgres via ODBC
This is working for us: we DirectQuery our Postgres data source via an Azure-hosted Windows instance running the custom connector on an on-premises gateway 24/7.
I am thinking about using Snowflake as a data warehouse. My databases are in Azure SQL Database, and I would like to know what tools I need to ETL my data from Azure SQL Database to Snowflake.
I think Snowpark could work for data transformations, but I wonder what other code tools I could use.
Also, I wonder whether I should use Azure Blob Storage as a staging area, or whether Snowflake has its own.
Thanks
You can use Hevo Data, a third-party tool, to migrate data directly from Microsoft SQL Server to Snowflake.
STEPS TO BE FOLLOWED
Make a connection to your Microsoft SQL Server database.
Choose a replication mode.
Create a Snowflake Data Warehouse configuration.
Alternatively, you can use SnowSQL to connect Microsoft SQL Server to Snowflake: export the data from SQL Server using SSMS, upload it to either Azure storage or S3, and move the data from storage into Snowflake.
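For the storage-to-Snowflake step, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, container, stage, and table names are all placeholders, not values from the original post.

    # Minimal sketch: stage the CSV files exported from SQL Server in
    # Azure Blob Storage, then bulk-load them into Snowflake.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # placeholder account identifier
        user="my_user",            # placeholder credentials
        password="my_password",
        warehouse="MY_WH",
        database="MY_DB",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # Point an external stage at the Azure Blob container holding the
    # exported CSV files (the SAS token is a placeholder).
    cur.execute("""
        CREATE OR REPLACE STAGE azure_stage
        URL = 'azure://myaccount.blob.core.windows.net/mycontainer'
        CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=placeholder')
    """)

    # Bulk-load the staged files into the target table.
    cur.execute("""
        COPY INTO my_table
        FROM @azure_stage
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    conn.close()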
REFERENCES:
Microsoft SQL Server to Snowflake
How to move the data from Azure Blob Storage to Snowflake
I am trying to read data from Databricks Delta Lake via Apache Superset. I can connect to Delta Lake with a JDBC connection string supplied by the cluster, but Superset seems to require a SQLAlchemy string, so I'm not sure what I need to do to get this working. Thank you, anything helps.
[screenshot: Superset database setup form]
Have you tried this?
https://flynn.gg/blog/databricks-sqlalchemy-dialect/
Thanks to contributions by Evan Thomas, the Python databricks-dbapi package now supports using Databricks as a SQL dialect within SQLAlchemy. This is particularly useful for hooking up Databricks to a dashboard frontend application like Apache Superset. It provides compatibility with both standard Databricks and Azure Databricks.
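If that dialect works for you, the SQLAlchemy URI you paste into Superset's database form would look roughly like the sketch below; this assumes the databricks+pyhive scheme registered by databricks-dbapi, and the host, token, database, and cluster name are placeholders.

    # Minimal sketch, assuming the databricks+pyhive dialect from the
    # databricks-dbapi package. The same URI string is what Superset
    # asks for in its database setup form.
    from sqlalchemy import create_engine, text

    uri = (
        "databricks+pyhive://token:dapiXXXXXXXX"           # placeholder token
        "@myworkspace.cloud.databricks.com:443/default"    # placeholder host/db
        "?cluster=my-cluster"                              # placeholder cluster
    )
    engine = create_engine(uri)

    # Quick smoke test of the connection.
    with engine.connect() as conn:
        for row in conn.execute(text("SHOW TABLES")):
            print(row)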
Just use PyHive and you should be ready to connect to the Databricks Thrift JDBC server.
I'm new to PySpark, so can you please suggest how to connect to Azure SQL DW from PySpark using a Jupyter notebook? I'm not using HDInsight or Databricks.
I have set up PySpark and the Jupyter notebook using this link.
First, please make sure you have downloaded the Microsoft JDBC Driver for SQL Server from here (Download Microsoft JDBC Driver for SQL Server) and added it to your Spark jar libraries path.
Second, it sounds like you set up PySpark and the Jupyter notebook on premises or locally. If your setup is not running in the Azure cloud, you must add your client IP to your Azure SQL DW firewall rules; please refer to the section Create a server-level firewall rule of the official document Quickstart: Create and query an Azure SQL data warehouse in the Azure portal to learn more.
Next, you need to find the JDBC connection string of your Azure SQL DW, as described in the section Sample JDBC connection string of the document Connection strings for Azure SQL Data Warehouse; you can find it on the Overview tab or under SQL databases in the Azure portal.
Then, you can refer to the blog post PySpark connection with MS SQL Server to connect to Azure SQL DW via PySpark in your Jupyter notebook, along the lines of the sketch below.
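Putting the steps together, a minimal PySpark sketch might look like the following; the server, database, table, credentials, and jar path are placeholders.

    # Minimal sketch: read an Azure SQL DW table from PySpark over JDBC.
    # Assumes the mssql-jdbc jar downloaded above; all names and
    # credentials below are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("sqldw-example")
        .config("spark.jars", "/path/to/mssql-jdbc.jar")  # placeholder path
        .getOrCreate()
    )

    jdbc_url = (
        "jdbc:sqlserver://myserver.database.windows.net:1433;"
        "database=mydw;encrypt=true;loginTimeout=30;"
    )

    df = (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "dbo.MyTable")      # placeholder table
        .option("user", "myuser@myserver")     # placeholder credentials
        .option("password", "mypassword")
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .load()
    )
    df.show(5)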
Hope it helps.
Is it possible to connect ADF to an Oracle database on AWS as a source and migrate the data to an Azure SQL Server?
I've made some attempts and the result was always a timeout.
The goal was achieved using an Integration Runtime, but I didn't want to use it. I'd like a direct connection.
Hi, Vinicius. Based on the Oracle connector documentation, there are no special properties you need to configure beyond host, user, password, etc. for an Oracle database on AWS. You could check the supported versions of the Oracle database.
If you still have the timeout issue, you could submit feedback to Azure Data Factory to get an official statement.
Since you want to avoid an IR, you could consider the solutions below:
1. Try exporting the data to AWS S3; ADF supports the AWS S3 connector as a source (see the sketch after this list).
2. Try the Data Integration (Kettle) tool to transfer the data via a JDBC driver.
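For option 1, a minimal sketch of the export step could look like this; it assumes the python-oracledb and boto3 packages, and the DSN, credentials, bucket, and table names are placeholders.

    # Minimal sketch: dump an Oracle table to CSV and upload it to S3,
    # so ADF can pick it up with its Amazon S3 connector.
    import csv
    import io

    import boto3
    import oracledb

    conn = oracledb.connect(
        user="myuser", password="mypassword",        # placeholder credentials
        dsn="myhost.aws.example.com:1521/ORCLPDB",   # placeholder DSN
    )
    cur = conn.cursor()
    cur.execute("SELECT * FROM my_table")            # placeholder table

    # Write a header row from the cursor metadata, then the data rows.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([col[0] for col in cur.description])
    writer.writerows(cur)

    boto3.client("s3").put_object(
        Bucket="my-staging-bucket",                  # placeholder bucket
        Key="exports/my_table.csv",
        Body=buf.getvalue().encode("utf-8"),
    )
    conn.close()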
Is it possible to connect to an Azure SQL database from Access using the 'SQL Server Native Client' driver and Azure Active Directory authentication?
I can connect using the ODBC Driver for SQL Server, but in Access I have a problem with date and datetime2 columns (Access sees them as text fields instead of datetime).
This is a known issue with older SQL Server ODBC drivers.
Use a more recent ODBC driver. An overview of versions can be found on this MS Docs page.