Power BI SQL Connection - powerbi-desktop

I have a Power BI Desktop report that extracts data from a database server via SQL.
Once the extraction is complete, is it possible to disconnect the SQL DB connection but retain the data already extracted in the tables (without further refreshes, of course)? Thanks

Yes. Once you import the data, it is saved within the .pbix file, so the report keeps working without the connection; you just won't be able to refresh.

Related

Connection vs Query in Excel data model

What is the difference between a 'connection' and a 'query' as Excel defines it in their data model? For example, if I load a CSV file from a local drive, it stores that as a query (it shows a 'select *' as the query against the file). What then would be considered a connection, and what's the difference between the two? The only thing I can think of is that a connection would not return data without specifying the table/query to use -- for example, a database connection where it has multiple tables, or possibly connecting to another Excel file if it had more than one tab.
Reference: https://support.microsoft.com/en-us/office/create-edit-and-manage-connections-to-external-data-89d44137-f18d-49cf-953d-d22a2eea2d46
Every query is also a connection, but not all connections are queries.
Connections have been in Excel for a long time, typically associated with commands that access a data source such as SQL Server. The page you linked to has more links at the bottom; you may want to read up there.
The term "query" is now typically associated with Power Query, where the data connection to the data source is made via the Power Query engine, and then further refined in the query editor.
So, each query (Power Query) also has a connection, but you can only edit Power Queries in the Power Query editor, whereas legacy connections can be edited in the properties dialog of the connection.
Edit: Let's put it this way: The connection is just that. It connects your workbook to a data source. Like a highway connecting two cities.
A query is the request for actual data that you spell out, calling from your workbook (via the connection) into the data source. The data source then sends the data back (via the connection). The mechanics of asking for, receiving, and manipulating the received data (e.g. cleaning it up, storing it in the workbook) are what the query does, but it can't do this without the connection. The query is the actual traffic on the highway.
Before Power Query, you could also connect to SQL Server and return data. The query details are visible in a tab in the connections dialog, so connection and query were used synonymously. These legacy data tools are now hidden by default and must be activated in the Excel Advanced options.
With Power Query, the brand name influences the terminology. The term "query" now more often than not means a Power Query, whereas some people may use "connection" (which is always a part of any query) for old-style, legacy data connections (which also contain queries).
However, when you use Power Query, each of these queries will use connections. These are established when you first create the query. Your workbook may have a number of connections to different data sources. The credentials for each data source are stored with the connections (on your computer), not in the Power Query. This is like your toll fee for the highway. By storing the credentials with the connection, you establish the permission to use the connection and it doesn't matter how many people you bring back in your bus.
You can even use the same connection (to a specific SQL Server) for several different queries. When you create the first query to the SQL Server, you are prompted for credentials for that new connection (your toll for the highway). When you create another query to the same SQL Server, the connection already exists and you are not prompted for your credentials.
You can drive your bus along that same highway several times and pick up people from different suburbs of the city that the highway connects you to.
Your highway toll fee is only valid for a limited time. You can take as many trips as you want, but it will expire after some time. (This happens with SharePoint credentials after 90 days, after which you have to provide your credentials again. I don't know about SQL Server, though.)
When you send a workbook with a query to SQL Server to someone else, they need to provide their own credentials in order to use the connection. Your toll fee does not cover their bus.
I'm going to stop now before this turns into a children's book.
Hope it helps.
In addition, a connection is a dynamic link and can be set to refresh:
in the background,
when the file is opened,
every X minutes,
or
when the queries are refreshed.
A query, however, is a more static link and needs to be refreshed manually to load the latest data.

Azure Stream Analytics Job cannot detect either input table or output table

I'm new to Azure Stream Analytics jobs, and I want to use reference data from an Azure SQL DB to load into Power BI to have streaming data.
I set up the storage account when configuring the SQL input table. I tested the output table (Power BI), which is also fine, no errors.
I tested both the input table and output table connections; both connected successfully, and I can see the input data in the Input preview.
But when I tried to compose a query to test it out, the query cannot detect either the input table or the output table.
The output table icon is also greyed out.
Error message: Query must refer to at least one data stream input.
Could you help me?
Thank you!!
The test query portal will not allow you to test the query if there are syntax errors. You will need to correct the syntax (shown as yellow squiggles) before testing.
Here is a sample test query without any syntax error messages:
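A minimal pass-through query of this kind might look as follows (a sketch; `myinput` and `myoutput` are placeholder aliases that must match the names defined under the job's Inputs and Outputs):

```sql
-- Minimal pass-through Stream Analytics test query (sketch).
-- [myinput] and [myoutput] are placeholder aliases; use the names
-- configured on the job's Inputs and Outputs blades.
SELECT *
INTO [myoutput]
FROM [myinput]
```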
Stream Analytics requires at least one input coming from one of these three streaming sources: Event Hubs, IoT Hub, or Blob Storage/ADLS. We don't support SQL as a streaming source at this time.
Using reference data is meant to augment the stream of data.
From your scenario, I see you want to get data from SQL into Power BI directly. For this, you can connect Power BI directly to your SQL source.
JS (Azure Stream Analytics)

Connection problem when sending XML data from a Node.js application to an MS SQL database SP

I have been using the MS SQL package to connect a Node.js API to an MS SQL database. When I execute a stored procedure that accepts XML as input, I get a connection-lost error. I thought the large XML data might be causing a problem with the connection timeout, so I increased it, but I still got the error. I don't get a connection error when I execute other stored procedures. What is the reason for this connection-lost error?
In the config object, I have increased the connectionTimeout value to 30000.
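For context, a sketch of such a config object for the node-mssql package (the host and credential values are placeholders). Note that connectionTimeout only bounds establishing the connection, while each query or stored procedure call is bounded by the separate requestTimeout (15000 ms by default), which is often the real limit hit by a long-running SP:

```javascript
// Sketch of a node-mssql connection config. connectionTimeout covers only
// establishing the connection; requestTimeout bounds each query/SP call,
// so a large XML payload that runs long needs the latter raised too.
const config = {
  server: "localhost",        // placeholder host
  database: "YourDB",
  user: "apiUser",            // placeholder credentials
  password: "secret",
  connectionTimeout: 30000,   // 30 s to establish the connection
  requestTimeout: 120000      // 120 s per SP call (default is 15000)
};

module.exports = config;
```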
If the AUTO_CLOSE option is enabled at the database level, set it to OFF. You can use the following query:
ALTER DATABASE YourDB SET AUTO_CLOSE OFF;
Once it's set to OFF (verify with select name, is_auto_close_on from sys.databases), try running the SP again.
If it's still the same issue, you need to investigate your procedure to identify which part of the query, which tables, or which table indexes are causing the issue.

Spark Thrift Server force metadata refresh

I'm using Spark to create a table in the Hive metastore, then connecting from MSSQL to the Spark Thrift Server to query that table.
The table is created with:
df.write.mode("overwrite").saveAsTable("TableName")
The problem is that every time after I overwrite the table (it's a daily job), I get an error when I connect with MSSQL. If I restart the Thrift Server it works fine, but I want to automate this, and restarting the server every time seems a bit extreme.
The most likely culprit is the Thrift cached metadata which is no longer valid after the table overwrite. How can I force Thrift to refresh the metadata after I overwrite the table, before it's accessed by any of the clients?
I can settle for a solution for MSSQL, but there are other "clients" of the table, not just MSSQL. If I can force the metadata refresh from Spark (or a Linux terminal) after I finish the overwrite, rather than ask each client to run a refresh command before it requests the data, I would prefer that.
Note:
spark.catalog.refreshTable("TableName")
does not work for all clients, just for Spark, while the SQL statement
REFRESH TABLE `TableName`;
works for Qlik. But again, if I ask each client to refresh, it might mean extra work for Thrift, and mistakes can happen (such as a dev forgetting to add the refresh).
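One way to automate this, sketched under the assumption that beeline is available to the job and the Thrift Server listens on thrift-host:10000 (both placeholders): have the daily job issue the refresh through the Thrift Server itself right after the overwrite, so the server's cached metadata is rebuilt once for all clients.

```shell
# Run once from the job, right after the overwrite (sketch).
# Issuing REFRESH TABLE through the Thrift Server rebuilds its cached
# metadata for every client; host/port are placeholders for your cluster.
beeline -u "jdbc:hive2://thrift-host:10000" -e "REFRESH TABLE TableName;"
```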

Redshift to Azure Data Warehouse CopyActivity Issue - HybridDeliveryException

Facts:
- I am running an Azure Data Factory pipeline between AWS Redshift -> Azure Data Warehouse (since the Power BI Online Service doesn't support Redshift as of this post's date).
- I am using Polybase for the copy since I need to skip a few problematic rows. I use the "rejectValue" key and give it an integer.
- I made two activity runs and got different errors on each run.
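The Polybase reject setting described above sits in the copy activity's SQL DW sink roughly as follows (a sketch; the rejectValue of 2 is a placeholder for the integer used):

```json
{
  "sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
      "rejectType": "value",
      "rejectValue": 2
    }
  }
}
```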
Issue:
Run no:1 Error
Database operation failed. Error message from database execution : ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.BooleanWritable,Source=.Net SqlClient Data Provider,SqlErrorNumber=106000,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=106000,State=1,Message=org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.BooleanWritable,},],'.
Run No:2 Error
Database operation failed. Error message from database execution : ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message= ,Source=.Net SqlClient Data Provider,SqlErrorNumber=106000,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=106000,State=1,Message= ,},],'.
Below is the reply from Azure Data Factory product team:
Like Alexandre mentioned, error #1 means you have a text-valued column on the source Redshift where the corresponding column in SQL DW has type bit. You should be able to resolve the error by making the two column types compatible with each other.
Error #2 is another error from Polybase deserialization. Unfortunately the error message is not clear enough to find the root cause. However, the product team has recently made some changes to the staging format for Polybase loads, so you should no longer see such errors. Do you have the Azure Data Factory runID for the failed job? The product team could take a look.
Power BI Online Service does support Redshift, through ODBC and an On-Premises Data Gateway (https://powerbi.microsoft.com/en-us/blog/on-premises-data-gateway-august-update/). You can install the latter on a Windows VM in Azure or AWS.
Redshift ODBC Drivers are here: http://docs.aws.amazon.com/redshift/latest/mgmt/install-odbc-driver-windows.html
Otherwise, your error indicates that one column of your SQL DW table does not have the expected data type (you probably have a BIT where a CHAR or VARCHAR should be).
