How do I reduce/eliminate data imports in Power BI Desktop?

I am a software developer who is new to the Power BI world. I have inherited a project that uses Power BI Desktop as a reporting platform to take advantage of its ability to generate ad hoc reports, including graphics.
We are using SQL Server 2016 as the data source. The data used for the reports is stored in the database as XML text. The XML is provided by a vendor and stored in the database. We want to read the XML from the database into XMLDATA, then decompose it into data tables (30+ tables) using queries. The XMLDATA table is set to DirectQuery storage mode. All of the decomposed tables are set to Import storage mode with their query source set to XMLDATA.
When a refresh is done, the data for each decomposed table is imported, which takes a long time (2+ hours for 17,500 XML records, and this is a small test set). It seems as though Power BI Desktop is reading all of the database rows for each table instead of reusing the XMLDATA already read from the database. Is there a way to set up the decomposed tables so they use XMLDATA rather than importing separately for each table? We are only using Power BI Desktop and none of the other Power BI products/services.
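For reference, the per-table decomposition described above amounts to something like the following T-SQL if it is pushed down to SQL Server rather than repeated in Power BI for every table. This is only a rough sketch; the table, column, and element names (dbo.XMLDATA, RecordId, XmlText, /Order/Line) are assumptions used to show the shape of the query, not the real schema.

    -- Rough illustration only: shredding the stored XML once on the SQL Server side,
    -- so each "decomposed" table is just a flat result set.
    -- dbo.XMLDATA, RecordId, XmlText, and /Order/Line are hypothetical names.
    SELECT
        x.RecordId,
        line.value('(Sku)[1]',      'varchar(50)') AS Sku,
        line.value('(Quantity)[1]', 'int')         AS Quantity
    FROM dbo.XMLDATA AS x
    CROSS APPLY (SELECT CAST(x.XmlText AS xml) AS Doc) AS c
    CROSS APPLY c.Doc.nodes('/Order/Line') AS n(line);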

Related

What can be a good approach for ETL when data is on SharePoint Online and is to be used for Power BI?

I am working on a case where we have a large volume of data. The data is spread across multiple Excel files, each with more than 500,000 rows, and new files are added monthly.
I have created a Power BI report, and each month when the new file is loaded, the refresh takes a long time.
To simplify this, we are trying to use SSIS to import the data from the Excel files into a database, which then serves as the base data for the Power BI file (a rough sketch of this pattern is shown below).
In this approach, we are downloading the data locally from the cloud (SharePoint Online) and then uploading it back to the cloud (a Power BI Premium workspace).
I am not sure whether this is the right approach or an efficient way to optimize the refresh time. Can anyone suggest whether there is a better solution available? Ideally, there would be a solution that consolidates all the data in the cloud, so there is no need to download it and transfer it using SSIS.
Regards,
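For illustration, the staging pattern described above (SSIS appends each monthly Excel file to a database table, and Power BI imports from one consolidated object) might look roughly like this. Every object and column name here is hypothetical.

    -- Hypothetical sketch of the described approach: SSIS loads each monthly
    -- Excel file into one staging table, and the Power BI file imports from a
    -- single view over it. All names and columns are assumptions.
    CREATE TABLE dbo.MonthlySalesStaging
    (
        SourceFileName varchar(260)  NOT NULL,  -- which Excel file the row came from
        LoadedAtUtc    datetime2(0)  NOT NULL DEFAULT SYSUTCDATETIME(),
        CustomerId     int           NOT NULL,
        SaleDate       date          NOT NULL,
        Amount         decimal(18,2) NOT NULL
    );
    GO

    -- Base data object that the Power BI report connects to.
    CREATE VIEW dbo.vw_SalesBase
    AS
    SELECT CustomerId, SaleDate, Amount, SourceFileName
    FROM dbo.MonthlySalesStaging;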

Switch Power BI data sources from Excel to Azure Analysis Services

Right now I have created a Power BI dashboard (using Power BI Desktop) that retrieves data from an Excel file. Later on I will replicate the same data model in an Azure Analysis Services tabular model.
Is it possible to switch my Power BI dashboard's data source to Azure Analysis Services seamlessly? What I mean is that I don't want to do major rework on my dashboard (re-creating the visualizations, etc.). How do I do that?
Thank you
In my view, the hard fact is that it may not be seamless. There may be ways to minimize the extra work, but you need to answer a few questions yourself and then decide:
Do you plan to use SSAS Tabular in Power BI with a live connection or in import mode? (I assume you are probably hosting this cube on-premises.)
Is the data layout in the Excel file (think of it as flattened data) going to be the same as in SSAS Tabular?
One option worth considering would be to load the SSAS Tabular cube with the data from the Excel file first, and then start the Power BI development against it. That way, source changes in Power BI will not be an issue going forward.
Hope this helps.

U-SQL Tables vs SQL Data Warehouse

So here's where I'm at.
I'm storing huge amounts of data in Data Lake Store. But when I want to make a report (it can be a month's worth), I want to schematize the data into a table that I can refer to over and over again when querying.
Should I just use the built-in database feature that Data Lake Analytics provides by creating U-SQL tables (https://msdn.microsoft.com/en-us/library/azure/mt621301.aspx), or should I create this table in SQL Data Warehouse? I guess what I really want to know is what the pros and cons of each are, and when it is best to use which.
By the way, I'm a noob in this Microsoft Azure world. Still actively learning.
At this point it depends on what you want to do with the data.
If you need interactive report queries, then moving the data into a SQL DB or DW schema is recommended at this point until ADLA provides interactive query capabilities.
If you need the tables during your data preparation steps, want to use partitioning to manage data life cycles, or need to run U-SQL queries that can benefit from the clustering and data distribution offered by U-SQL tables, then you should use U-SQL tables.
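As a rough sketch of the first option, landing a month of Data Lake output in SQL Data Warehouse for repeated, interactive report queries could look like the following. The table name, columns, and distribution key are assumptions, not a prescription.

    -- Hypothetical CTAS into Azure SQL Data Warehouse for interactive reporting.
    -- dbo.StagedEvents, dbo.MonthlyReport, the columns, and the HASH key are assumptions.
    CREATE TABLE dbo.MonthlyReport
    WITH
    (
        DISTRIBUTION = HASH(CustomerId),   -- spread rows across distributions
        CLUSTERED COLUMNSTORE INDEX        -- good default for analytic scans
    )
    AS
    SELECT CustomerId, EventDate, Metric
    FROM dbo.StagedEvents
    WHERE EventDate >= '2017-01-01' AND EventDate < '2017-02-01';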

CRM 2015: Archive options for Audit logs

We are using MS CRM 2015 and would like to know our options/best practices for archiving audit logs. Any suggestions, please? Thanks!
You can use the MSCRM Toolkit at http://mscrmtoolkit.codeplex.com/, which has a tool called Audit Export Manager to aid in archiving audit logs. The documentation for the tool is available at http://mscrmtoolkit.codeplex.com/documentation#auditexportmanager. The key things this tool lets you do are filter by entities, metadata, and summary or detail, and pick individual users, actions, and/or operations to include in your export. Exports can be limited to a particular date range and can be exported to CSV, XML, or XML Spreadsheet 2003 format. Note that I've had trouble exporting with a couple of the formats, but I typically get good results when exporting to CSV.
This is one of the tools I've found that gives you some flexibility when exporting audit records, since Microsoft CRM allows you to filter the audit data but doesn't provide a good built-in means to export it.
You can try the new Stretch Database feature in SQL Server 2016:
Stretch Database migrates your cold data transparently and securely to the Microsoft Azure cloud.
Stretch warm and cold transactional data dynamically from SQL Server to Microsoft Azure with SQL Server Stretch Database. Unlike typical cold data storage, your data is always online and available to query. You can provide longer data retention timelines without breaking the bank for large tables like Customer Order History.
There is a helpful hands-on review, SQL Server 2016 Stretch Database, with a very interesting SWITCH TABLE example.
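To give a feel for it, enabling Stretch for a cold audit table comes down to roughly the following T-SQL. The table name dbo.AuditHistory is hypothetical, and the database itself must also be enabled for Stretch against an Azure server first.

    -- Hypothetical sketch of stretching a cold audit table to Azure.
    -- dbo.AuditHistory is an assumed name; the database must already be enabled
    -- for Stretch (via the wizard or ALTER DATABASE) before this runs.
    EXEC sp_configure 'remote data archive', 1;
    RECONFIGURE;
    GO

    -- Start migrating the table's rows to Azure.
    ALTER TABLE dbo.AuditHistory
        SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));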
There should also be a solution based on moving archived audit data to a separate filegroup. Take a look at Transferring Data Efficiently by Using Partition Switching (a minimal T-SQL sketch follows the list below):
You can use the Transact-SQL ALTER TABLE...SWITCH statement to quickly and efficiently transfer subsets of your data in the following ways:
Assigning a table as a partition to an already existing partitioned table.
Switching a partition from one partitioned table to another.
Reassigning a partition to form a single table.
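A minimal sketch of the third pattern, switching the oldest partition of a partitioned audit table out into a standalone archive table; all object names and the partition number are hypothetical.

    -- Hypothetical example: move partition 1 of a partitioned audit table into an
    -- empty archive table as a metadata-only operation. dbo.AuditBase and
    -- dbo.AuditArchive are assumed names; the archive table must already exist,
    -- be empty, match the source schema/indexes/constraints, and live on the
    -- same filegroup as the partition being switched.
    ALTER TABLE dbo.AuditBase
        SWITCH PARTITION 1 TO dbo.AuditArchive;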

Oracle view columns empty

We use commodity trading software linked to an Oracle database to export reports to Excel. I'm connecting to this Oracle database using PowerPivot as well as SQL Developer. In doing so, I'm able to connect to Oracle directly and create live, refreshable reports that no longer need to be constantly exported.
I located an Oracle view responsible for generating one of the most important reports we export to Excel. What's strange is that all of the columns are completely empty. When I open it using PowerPivot or SQL Developer, I just see the headers, which contain no data. However, it populates with data just fine when exported from our trading software.
Does anyone know why this might be and how I can get this view to populate the data (using PowerPivot for example)?
Is this a materialized view I'm dealing with?
My first guess would be that it has to do with permissions or row-level security on the view. Whether it is a materialized view is impossible to determine from the information you've provided, but it should make no difference in accessing the data from Power Pivot.
This is a question for your DBAs and unlikely to be a problem in Power Pivot.
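If it helps the conversation with your DBAs, these are the kinds of checks involved. The schema and view name (RPT.V_TRADE_REPORT) and the account Power Pivot connects with are assumptions.

    -- Hypothetical Oracle checks; RPT.V_TRADE_REPORT and the reporting account
    -- are assumed names.

    -- 1. Does the reporting account have SELECT on the view?
    SELECT grantee, privilege
    FROM   dba_tab_privs
    WHERE  owner = 'RPT' AND table_name = 'V_TRADE_REPORT';

    -- 2. Is a row-level security (VPD) policy attached to the view?
    SELECT policy_name, object_owner, object_name
    FROM   dba_policies
    WHERE  object_owner = 'RPT' AND object_name = 'V_TRADE_REPORT';

    -- 3. Connected as the same account Power Pivot uses: does the view return rows?
    SELECT COUNT(*) FROM rpt.v_trade_report;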
