Good day. I have a report which consists of several textboxes and a tablix with row grouping. I have another two tablixes inside that group (perhaps I should merge them, since they consume the same dataset? They were separated due to the designer's requirements). Each group has a page break and a group name (a critical requirement: the Excel sheets must be named). Those two tablixes have ~50 columns and two rows: one for headers and one for values. The largest possible dataset consists of 31 * 24 * 16 = 11,904 rows. All cells use an evaluation function for the background color value (it's not very complicated). It takes ~15 seconds for SSRS to generate a preview of this report.
The report is exported to Excel using the .NET LocalReport class. On my machine and on the development server this takes about 20 seconds. On one of the client's machines it takes more than 15 minutes.
I've already removed all aggregation functions, and there is only one merged column header in the report. What else might help?
The performance issue can occur because SSRS and the database are not in the same network segment.
When you run the report from your own machine, remember to refresh it two or three times with the refresh button, because the SSRS cache can hide performance issues.
Another cause is running a subreport per row; subreport rendering is very slow.
Related
I'm running into some scalability issues: macro reports that once took 20 seconds now take 3 minutes.
I have an Excel sheet ("database") with about 100 columns and 10,000 rows; 90% of the cells in each row are formulas.
When running the reports I have to recalculate the workbook/worksheet on each loop to update the data. A single recalculation takes about 5 seconds, so the time adds up quickly.
However, I noticed that if I fill the database with static data (no formulas), the report is lightning fast at about 10 seconds.
So this makes me think I need a machine that just updates the "database" with all the formulas every few minutes; then I can import that static data into my main file and run reports in under 10 seconds.
Is that an ass backwards way to do this or is it a reasonable solution? What are better options?
It would be nice to keep the "database" in Excel, because we use some 3rd-party add-ins, rather than having to rewrite everything in SQL and put it in a MySQL database.
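Roughly, the snapshot step I'm picturing would look like the VBA sketch below (sheet names are placeholders; the "database" sheet is the one full of formulas): recalculate once, then paste the results as static values into a sheet the report file can import.

    Sub SnapshotDatabase()
        ' Recalculate the formula-driven "database" sheet once,
        ' then store the results as static values for fast report runs.
        Dim src As Worksheet, dst As Worksheet
        Set src = ThisWorkbook.Worksheets("database")          ' the sheet full of formulas
        Set dst = ThisWorkbook.Worksheets("database_static")   ' placeholder name for the static copy

        Application.Calculation = xlCalculationManual
        Application.ScreenUpdating = False

        Application.Calculate                                  ' one recalculation instead of one per loop

        dst.Cells.ClearContents
        src.UsedRange.Copy
        dst.Range("A1").PasteSpecial Paste:=xlPasteValues      ' values only, no formulas
        Application.CutCopyMode = False

        Application.ScreenUpdating = True
        Application.Calculation = xlCalculationAutomatic
    End Sub

That macro could be kicked off every few minutes with Application.OnTime or Windows Task Scheduler, while the main report file only ever reads the static sheet.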
Why does Power BI performance get affected so badly when adding DAX measures to a model?
I have a model with a couple hundred DAX measures and a report that contains a couple of these measures. This report runs fine, as expected. There are two reports we look at here: one in Power BI and another in Excel connected to the Power BI model.
I modified the underlying model to add a whole bunch of additional DAX measures, somewhere in the range of 2,000. Basically, I added different versions for Plan & Actuals for a number of rows in our P&L, then created various comparisons, e.g. vs LY, vs Plan, etc.
Now when I view the same report in Power BI, without any modifications to the report itself, it takes a bit longer to load. The biggest problem is in Excel: the report now takes multiple minutes to refresh.
My question is: why is refresh affected so much by this change when the new measures are not used anywhere? My understanding was that the report wouldn't be affected by additional measures that are not being used.
High-Level Problem: Excel doesn't seem to be able to handle some of the more complex dashboards I am trying to build, and I am hoping for some suggestions on how to alleviate this issue.
My job/requirements: I build executive- and sales-facing dashboards and reports for our organization in Excel through an OLAP cube connection. The tool has been great for running quick analyses, but some of the larger projects that I run through this connection often crash or take painfully long to refresh. Here are the requirements I have, and why I am using Excel:
- Need to manipulate/alter our data freely: for example, if I am pulling customer data with all 50 states in the columns and all 10,000+ customers in the rows, I need to be able to run a formula adjacent to the data that compares that customer/state data to company-wide customer/state data.
- The end product (dashboard/table) needs to be visually pleasing, as the end users are our sales team and executives.
- Need to be able to re-run/refresh the same analysis easily. Most of the dashboards I build get sent out on a daily or weekly cadence, so each refresh needs to take no more than a few minutes, ideally seconds.
- Need built-in time intelligence functionality: the dashboard I am building needs to know what day it is. Excel has the =TODAY() formula.
Symptoms of the issue: right now, when I run more complex queries, data refreshes can take upwards of 15 minutes. Some of these files are approaching north of 200 KB. For these problem files, further development is painfully slow and inefficient, because even the simplest formulas (like a VLOOKUP column off of the OLAP table) force my computer to use all 8 processors and hog about 95% of the 32 GB of RAM.
Specifics of the analysis I am running: much of this analysis requires pulling tens (and sometimes hundreds) of thousands of rows, and these tables are sometimes dozens of columns wide, as I might be pulling sales data by state or by month. Some dashboards have half a dozen of these OLAP tables, each with its own purpose. There is no way to reduce the amount of data I am pulling - I need all of it to reach the conclusions I am after.
For example, one of the projects I am currently working on requires me to take 24 months of sales data, broken out by state (as columns, on the x axis of the OLAP table) and by individual customer (over 10,000) on the rows. This calculation alone takes forever to process and refresh, and it causes all subsequent formulas to run extremely slowly, even simple VLOOKUP columns adjacent to the OLAP table.
Research I have done: I have looked into alternatives to Excel, but the largest issue I face is that I still have a set of requirements that need to be met, and the obvious tools at my disposal don't seem to meet those requirements at first glance. Again, the end product of what I am building must be easily consumable for our end users (sales reps and execs). If I were to run a SQL query through Microsoft SQL Server to pull the data I am looking for, I would need to be able to refresh the data on a daily basis - but more importantly, I would need to be able to easily manipulate and build out extra calculations and formulas off of the query. Excel seems like the only tool that checks those boxes, and yet it also feels like a dead end.
Here is the situation we have:
a) I have an Access database / application that records a significant amount of data. Significant fields include hours, # of sales, # of unreturned calls, etc.
b) I have an Excel document that connects to the Access database and pulls data in to visualize it
As it stands now, the Excel file has a Refresh button that loads the new data into a large PivotTable. The main 'visual form' then uses VLOOKUP to pull the results it needs from that data, based on the related hours.
This operation is slow (~10 seconds) and seems to be redundant and inefficient.
Is there a better way to do this?
I am willing to go just about any route - just need directions.
Thanks in advance!
Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I get the data rapidly and reliably without so much time involved (it loads around 3,000 records into the PivotTables)?
You need to find out whether it's the PivotTable refresh or the VLOOKUP that's taking the time.
(Try removing the VLOOKUPs to see how long just the refresh takes.)
If it's the VLOOKUP, you can usually speed that up
(see http://www.decisionmodels.com/optspeede.htm for some hints).
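One common trick is to sort the lookup data by the key and replace each exact-match VLOOKUP with two approximate-match lookups, which run far faster on sorted data. A rough VBA sketch that writes such a formula (sheet and range names are placeholders):

    Sub WriteFastLookupFormulas()
        ' Assumes the lookup data on sheet "Data" is sorted ascending by the key in column A.
        ' Two approximate-match lookups on sorted data beat one exact-match lookup.
        Dim ws As Worksheet
        Set ws = ThisWorkbook.Worksheets("Dashboard")          ' placeholder sheet name

        ws.Range("D2:D3000").Formula = _
            "=IF(VLOOKUP($A2,Data!$A:$A,1,TRUE)=$A2," & _
            "VLOOKUP($A2,Data!$A:$C,3,TRUE),NA())"
    End Sub

This only works if the source data really is sorted by the lookup key, so sort it by that column first.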
If it's the PivotTable refresh, then it depends on which method you are using to get the data (Microsoft Query, ADO/DAO, ...) and how much data you are transferring.
One way to speed this up is to minimize the amount of data you are reading into the pivot cache by reducing the number of columns and/or predefining a query to subset the rows.
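For instance, here is a rough VBA/ADO sketch of that idea (the database path, table, column, sheet, and pivot names are all placeholders): it pulls only the columns and rows the PivotTable actually needs into a staging sheet and then refreshes the pivot that sits on top of it.

    Sub LoadSubsetAndRefreshPivot()
        ' Pull only the needed columns/rows from Access into a staging sheet,
        ' then refresh the PivotTable that reads from that sheet.
        Dim conn As Object, rs As Object
        Dim ws As Worksheet

        Set conn = CreateObject("ADODB.Connection")
        conn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
                  "Data Source=C:\Data\Tracking.accdb"          ' placeholder path

        Set rs = CreateObject("ADODB.Recordset")
        rs.Open "SELECT Hours, Sales, UnreturnedCalls " & _
                "FROM tblResults WHERE EntryDate >= Date()-7", conn   ' placeholder query

        Set ws = ThisWorkbook.Worksheets("Staging")             ' placeholder sheet name
        ws.Range("A2:C" & ws.Rows.Count).ClearContents          ' keep the header row in row 1
        ws.Range("A2").CopyFromRecordset rs                     ' fast bulk copy, no cell-by-cell loop

        rs.Close: conn.Close
        ThisWorkbook.Worksheets("Report").PivotTables("MainPivot").RefreshTable   ' placeholder names
    End Sub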
I'm creating an Excel dashboard that imports a variable number of months' worth of financial/accounting information from a database into an Excel sheet. Using this information, I have a Calculations sheet that computes some financial indicators, again month by month. Finally, this information is displayed in graphs on a separate sheet (one indicator per graph, with the monthly information plotted to show the trends). Currently I have written VBA code that formats the sheets to accommodate the number of months requested, pulls the data from the SQL server, and updates the graphs. Since there are 53 indicators for each operation (6 operations), this process takes about 3 minutes.
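To give an idea of the structure, each run is roughly the following simplified sketch (the server, database, sheet, and chart names here are placeholders, not the real ones):

    Sub BuildIndicatorDashboard()
        ' Simplified outline: pull the requested months from SQL Server,
        ' recalculate the indicator sheet, and repoint each graph at its series.
        Dim conn As Object, rs As Object
        Dim wsData As Worksheet, wsCalc As Worksheet

        Set conn = CreateObject("ADODB.Connection")
        conn.Open "Provider=SQLOLEDB;Data Source=FINSRV01;" & _
                  "Initial Catalog=Finance;Integrated Security=SSPI"    ' placeholder server/database

        Set rs = CreateObject("ADODB.Recordset")
        rs.Open "SELECT Operation, Account, Period, Amount " & _
                "FROM MonthlyBalances WHERE Period >= '2023-01-01'", conn   ' placeholder query

        Set wsData = ThisWorkbook.Worksheets("Data")
        wsData.Range("A2").CopyFromRecordset rs
        rs.Close: conn.Close

        Set wsCalc = ThisWorkbook.Worksheets("Calculations")
        wsCalc.Calculate                                        ' recompute the indicator formulas

        ' One graph per indicator; each is repointed at its (resized) monthly range.
        Dim cht As ChartObject
        For Each cht In ThisWorkbook.Worksheets("Graphs").ChartObjects
            cht.Chart.SetSourceData Source:=wsCalc.Range(cht.Name)   ' assumes a named range per chart
        Next cht
    End Sub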
Does anyone recommend a better way to do this? The current way 'works', but I've often thought there must be a more efficient approach.
Thanks!
Chris
You could look at skipping the Excel part and using SQL Server Reporting Services (SSRS). If you have ever used Business Objects or Crystal Reports, it's kind of the same thing, and I would imagine it would offer better performance than doing things in Excel.