I have reached the limit of 20 metrics in GDS. I am using a report to do some aggregation and calculation on data and then download it to an Excel file. To add more metrics I am using optional metrics, but every time I refresh the report I have to manually click all the squares before I am able to download the file. How can I deal with this?
AFAIK, Google Data Studio doesn't have a limit of 20 metrics.
Some visual component may have this limit. If that's the case, there isn't anything that can be done.
You can try adding multiple visuals and positioning them close to each other, so users will think they're the same component, like in the picture below:
Notice there are two tables (the first one is selected), but they are positioned in a way that makes users think there is only one.
I currently have approximately 10M rows and ~50 columns in a table that I wrap up and share as a pivot. However, this also means that it takes approximately 30 minutes to 1 hour to download the CSV, or much longer to do a Power Query ODBC connection directly to Redshift.
So far the best solution I've found is to use Python with redshift_connector to run update queries and UNLOAD a zipped result set to an S3 bucket, then use boto3/gzip to download and unzip the file, and finally refresh the workbook from the CSV. This results in a 600MB Excel file compiled in ~15-20 minutes.
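Roughly, the pipeline looks like this (a minimal sketch; the cluster endpoint, bucket, IAM role, and table/file names are placeholders, not the real values):

    import gzip
    import shutil

    import boto3
    import redshift_connector

    BUCKET = "my-report-bucket"       # placeholder bucket
    KEY = "exports/report000.gz"      # PARALLEL OFF writes a single <prefix>000 file;
                                      # the exact suffix may differ, check the bucket listing

    conn = redshift_connector.connect(
        host="my-cluster.example.redshift.amazonaws.com",  # placeholder endpoint
        database="analytics",
        user="report_user",
        password="...",
    )
    conn.autocommit = True
    cursor = conn.cursor()
    # Run any update queries first, then unload a single gzipped CSV to S3.
    cursor.execute("""
        UNLOAD ('SELECT * FROM reporting.pivot_source')
        TO 's3://my-report-bucket/exports/report'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
        CSV HEADER GZIP PARALLEL OFF ALLOWOVERWRITE
    """)

    # Download and decompress the unloaded file, ready for the Excel refresh.
    boto3.client("s3").download_file(BUCKET, KEY, "report.csv.gz")
    with gzip.open("report.csv.gz", "rb") as f_in, open("report.csv", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)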
However, this process still feels clunky, and sharing a 600MB Excel file among teams isn't great either. I've searched for several days but I'm no closer to finding an alternative: what would you use if you had to share a drillable table/pivot among a team with a 10GB datastore?
As a last note: I thought about programming a couple of PHP scripts, but my office doesn't have the infrastructure to support that.
Any help or ideas would be most appreciated!
Call a meeting with the team and let them know about the constraints; you will get some suggestions and you can offer some of your own.
Suggestions from my side:
For the file part:
Reduce the data. For example, if it is time dependent, increase the interval: hourly data can be aggregated down to daily data (see the sketch after this list).
If the data is related to groups, you can split the file into separate parts, one file per group.
Or send them only the final reports and numbers they require; don't send them the full data.
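A minimal sketch of that hourly-to-daily reduction, assuming pandas and hypothetical column names ("event_time" plus numeric measure columns):

    import pandas as pd

    df = pd.read_csv("report.csv", parse_dates=["event_time"])

    # Aggregate hourly rows down to one row per day before sharing the file.
    daily = (
        df.set_index("event_time")
          .resample("D")
          .sum(numeric_only=True)
          .reset_index()
    )
    daily.to_csv("report_daily.csv", index=False)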
For a fully functional app:
You can buy a desktop PC (if budget is a constraint, buy a used one or repurpose a laptop from old inventory) and create a PHP/Python web application that runs all of these steps automatically.
Create a local database and link it to the application (see the sketch after this list).
Build the charting, pivoting, etc. modules in that application, and remove Excel from your process altogether.
You can even use pre-built applications for the charting and pivoting part; Oracle APEX is one example.
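A minimal sketch of the "local database" step, assuming SQLite and placeholder table/file names:

    import sqlite3

    import pandas as pd

    df = pd.read_csv("report.csv")

    # One local database the web app can query, instead of shipping Excel files around.
    with sqlite3.connect("reporting.db") as conn:
        df.to_sql("pivot_source", conn, if_exists="replace", index=False)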
Similar to the project I am working on, this website has a search bar at the top of its home page:
On the linked website, the search bar works seemingly immediately when you visit the site. According to their own website, there have been roughly 20K MLB players in MLB history, and this is a good estimate for the number of dropdown items in this select widget.
On my project, it currently takes 10-15 seconds to make this fetch (from MongoDB, using Node + Express) for a table of ~15MB that contains the data for the select's dropdown items. This 15MB is as small as I could make the table, as it includes only two keys (one for the id and one for the name of each dropdown item). The table is large because there are more than 150K options to choose from in my project's select widget. I currently have to disable the widget for the first 15 seconds while the data loads, which results in a bad user experience.
Is there any way to make the data required for the select widget immediately available when users visit, so that the widget does not have to be disabled? In particular:
Can I use localStorage to store this table in the user's browser? Is 15MB too big for localStorage? The table changes and grows daily, so a copy in localStorage would be outdated the next day, no?
Can I avoid this fetch altogether? Is there perhaps a way to load the correct data into React only when a user searches for it?
Some other approach?
Fetching or caching this 15MB of data more quickly would improve our React app's user experience by quite a bit.
The data on the site you link to is basically 20k in size. It does not contain all the players but fetches data as needed when you click a link in the drop-down. So if you have 20MB of searchable data, you need to find a way to load only what is required. How to do that sensibly depends on the nature of the data. Many search bars with large result sets behind them use a typeahead search, where the user's input is posted back as they type (with a decent debounce interval) and the results matching that input are sent back in real time (usually limited to, say, the first 20 or 50 results).
So basically the answer is to find a way to serve up only the data that the user needs rather than downloading essentially the entire database to the browser (option 2 in your list). You will obviously need to provide a search API to allow that to happen.
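A minimal sketch of what that search endpoint's query could look like. Your stack is Node + Express over MongoDB; the query shape is the same there, it is just illustrated here with pymongo, and the collection and field names are assumptions:

    import re

    from pymongo import MongoClient

    coll = MongoClient()["app"]["dropdown_items"]   # assumed database/collection names

    # An anchored, case-sensitive prefix regex can use this index, so only the few
    # matching documents are read instead of the whole 15MB table.
    coll.create_index([("name", 1)])

    def search(prefix, limit=20):
        # Return at most `limit` matches for the user's (debounced) input.
        query = {"name": {"$regex": "^" + re.escape(prefix)}}
        return list(coll.find(query, {"_id": 1, "name": 1}).limit(limit))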
I am working on a Qt application in Python 3.6.5 using PySide2 v5.6.0. This application is a tool for labelling many tens of thousands of images for training neural networks, and it has a QTableWidget for viewing information pertaining to each image.
I have implemented filtering by label status/name/etc, and when a filter is applied, I am clearing the table with mainWin.tableWidget.clearContents() and re-populating it with the new entries. I have tested this on a few hundred images, and it clears the table in less than a second; however, when the details for more than a few thousand image files have been loaded into the table, the program hangs on the clearContents() method. I don't know if it will eventually finish, but I have waited over 30 minutes in some cases and it has never cleared the table. One of my testers has reported that if you do wait long enough, it will eventually filter on large data sets, but obviously this is not a viable solution.
Trying clear() and setRowCount(0) gives the same result.
The only thing special about the table is that I have a few signals hooked up to it to display images when a user clicks on a new row; when I disabled those, it still hung.
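For reference, a minimal sketch of the clear-and-repopulate step described above (PySide2). mainWin.tableWidget and the shape of the filtered rows are taken from the description, not actual code:

    from PySide2.QtWidgets import QTableWidgetItem

    def repopulate(table, filtered_rows):
        table.blockSignals(True)       # the row-click signals mentioned above
        try:
            table.setRowCount(0)       # one of the clearing variants that hangs
            table.setRowCount(len(filtered_rows))
            for row, values in enumerate(filtered_rows):
                for col, text in enumerate(values):
                    table.setItem(row, col, QTableWidgetItem(str(text)))
        finally:
            table.blockSignals(False)

    # repopulate(mainWin.tableWidget, filtered_rows)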
Here is my question: is there some way to quickly clear the rows, or perhaps filter them with some built-in QTableWidget function?
Also, I feel I should point out that I am not storing image data in the table. All that I am storing in each cell is about 10-30 characters of text. Images are loaded from the disk when they are selected and are not stored in RAM.
Thank you.
In one of my SharePoint sites I have a document library with 7,000 document sets; if I count the files inside, there are 21,000 files in total.
In the beginning we had some views, but as the library grew we ran into list view threshold issues. What I did was remove some of those views and use search results web parts to get the results the users want. For me, increasing the threshold is not a solution because this document library grows fast (2K files per month).
This solved the problem for some time.
However, some users do need to export to Excel to do pivots based on this data. The only way I can think of is using Reporting Services in SharePoint integrated mode, because I can export reports to Excel and the users can then do their pivots.
The question is: will I have the same threshold problem when I build a report based on list data?
What other options do I have?
I have exported files to Excel with 600,000+ rows. If the data you are pulling out reaches a certain size, you will have to transition to .csv files, as Excel also has a row limit. The main issues you will run into on very large datasets are timeouts, which can be managed by configuring your HTTP and SSRS timeouts; however, this will lead to other issues, including long-running reports of 15+ minutes and bandwidth usage.
I would recommend testing your scenario with a simple report to pull your data and see where the limit is reached. Also, look into filtering mechanisms using parameters to reduce the amount of data returned to the client. If it becomes unmanageable, then you may want to look into SSIS or some of the data-warehousing features. SSRS also has cached reporting that can shift the processing burden to off hours if real-time data is not a necessity.
I need to give users the option to download a report into Excel.
The report includes about 70,000 records (and about 50 fields).
The report generates within about 1.5 minutes, but when the user tries to download, nothing happens.
(In Explorer, a second tab opens and the progress wheel keeps turning...)
I have tried on a filtered list of about 10,000 and it downloads quickly without issue.
However, at 30,000 it already has issues.
I tried this from the server itself, so network issues were ruled out.
I also tried sending out the report by email subscription, but this also failed.
Is there a way to go around this size limitation?
Or is there a way to give the user similar flexibility on the report server itself, as they would have in Excel, without building every possible filter into the report? (Probably too big a question here.)