I'm planning on rendering a list that contains nested tables.
If, for example, a row happens to contain a table containing 100s of items, would react-virtualized break up that row? Or would it render the entire row no matter its height, therefore defeating the purpose of virtualized infinite scroll?
Looks like the answer is no:
https://github.com/bvaughn/react-virtualized/issues/980
I'm not able to take the means for a large dataset, given that the number of attributes is irregular.
I have posted a simplified case for the problem. It explains the problem very well.
An idea that I came up with: make a filter to condition on a single attribute. However, I still don't see a way to do this efficiently (other than doing it all by hand).
see excel file:
All help is much appreciated.
I'm basically looking for a function/method to achieve taking means of all different attributes conditioned on each person for a large dataset without doing it by hand.
You can use AVERAGEIFS() inside an IF:
=IF(OR(A2<>A1,B2<>B1),AVERAGEIFS(C:C,A:A,A2,B:B,B2),"")
The first part of the IF tests whether the row starts a new group, i.e. either the person or the attribute changes. If so, it uses AVERAGEIFS() to return the average of that group; otherwise it returns a blank.
What you want to do can be accomplished very simply with a pivot table.
Simply select one of the cells inside the range of data you want to process (see this video for general use of a pivot table: https://www.youtube.com/watch?v=iCiayB6GrpQ ),
then go to the Insert tab and insert a pivot table.
Once you have it, simply check People, Attribute, and Value. Then drag People and Attribute into Rows, drag Value into the Values window, open the drop-down list and change it from Sum of Value to Average, and you should be done. https://i.stack.imgur.com/nYEzw.png
I have been working on a small project. I am trying to display all the results in the same row without NULL values. I've already written a small expression to remove the NULL values: "=IIF(IsNothing(Fields!RegisterNo.Value),True,False)". However, the rows seem to be moving one level down, as shown in the picture ResultMatrix1. I want the results to be on the same level. Can you please tell me if this is possible and how I can achieve it? Is it something to do with the groupings or something else?
Design Groupings
By default, when you create a table, there is a Row Group called "Details" that is not actually grouped by anything. This causes it to produce one row for each row from the dataset. Since you are trying to group these, you need to make sure that innermost group is grouped by your Staff Ref No.
In the lower-right cell, you may need to change the expression to use a Max function. This will simply avoid arbitrarily showing blanks when they happen to be sorted before a real value within that group.
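For example, assuming the lower-right cell is currently just Fields!RegisterNo.Value, its expression might become something like:
=Max(Fields!RegisterNo.Value)
Max simply returns the one real (non-blank) value within the group scope, so blank detail rows no longer push the actual value down a level.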
I'm working on automating removing some rows of data from multiple files. One of the criteria is to exclude all products except a specific list. Currently I'm doing that by iterating through the rows and deleting rows that aren't for the right product.
I'm thinking that sorting the rows so that the products to be kept are first, and the ones to be deleted are clumped together lower down in the data would be faster, as I could then just find the first bad row and clear contents (as opposed to Range("<XX:XX>").Delete Shift:=xlUp).
The trouble I'm having is that the products actually present in any of the files varies, as does the list of products to be kept. It's pretty much a unique list of products to keep for each of over a dozen files.
So what I'm hoping is that there is a way, when I specify the custom sort list, to have a single item that stands for all the other products I didn't explicitly list.
For example, if I wanted to sort the alphabet "V, C, R, ", would there be a way to do so without specifically listing every other letter in the alphabet?
I'm avoiding specifying the remainder of the list for two reasons. One, the list is fairly long and retyping the full list for each of the files I need to sort would be pretty error prone and hard to maintain. Two, the product list is not necessarily static going forward, so I don't want to have to update the macro each time a new product is added.
I'm not sure I fully understood your question, but perhaps using a column with a formula might help?
It can be automated by adding a column in code, setting the formulas in the newly created range, and then clearing or sorting based on a filter on that new range.
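For example (purely for illustration, assuming the products to keep are listed in a named range called KeepList), the added column could hold a formula such as:
=IF(COUNTIF(KeepList,A2)>0,"Keep","Delete")
You could then sort or filter on that column and clear everything marked "Delete" in one block, without ever having to type out the products you don't want.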
I hope someone can help me come up with an algorithm.
I'm still very new to Apache POI, and I was assigned to come up with an algorithm for reading a template (Excel) and extracting the headers/column names from the data itself.
The following must be taken into account:
There can be multiple headers/column names in just one sheet of an Excel file.
Headers can be horizontal AND/OR vertical in nature. This means that there could be a mixture of vertical and horizontal headers in one sheet.
Headers don't necessarily have to be at the very first row of the file. There could be introductions or banner images there.
The system must allow ANY kind of Excel format, so there is no control over the formatting of the cells, the naming convention, etc.
Some headers are alphanumeric in nature, which means it also contains numbers.
Some cells are merged to make room for a specific header.
Any ideas and suggestions are very much welcome. Just let me know if you have further clarifications.
(I know nothing about Apache POI, but I know a bit about working with Excel Interop.)
If the sheets to be detected are yours, I'd recommend NAMING those header cells. To name a cell in Excel, there's a field at the top left of the screen, where normally the cell coordinates appear (like "A1" or "B2" and so on). Type a name in that place, and you will be able to identify that cell via code by its name ('Worksheet.Range("Name")' is how you get those cells via code).
To manage names, go to "Insert - Names" or "Formulas - Name Manager", depending on your version of Excel.
(Personally, I never work with sheets via code without naming the headers; I then use "Offset" to get the data cells corresponding to those headers. This allows me to freely edit the sheet later without breaking the code.)
If the sheets aren't yours, then you'll need to find out the extents of the data (last row and last column).
Then check for the first line that contains all columns filled, none of them blank. That's a probable horizontal header.
Likewise, check for the first column where all rows are filled. That's a probable vertical header.
You could also search for completely blank rows and/or columns to find headers that come AFTER some data, in case a sheet contains multiple horizontal or vertical headers.
You could use some formatting properties of those cells (Range.Interior or Range.Font, for example) to identify whether they are headers (usually headers have a different format, color, borders and so on).
If you're sure there's no numeric header, that is, all headers contain text, check the type of data in the cells. If all are strings, the probability of a header increases.
Even so, this is a tricky thing to do. If the sheets don't follow some pattern, once in a while one of them can deceive your code and bring false results. I'd recommend, if allowed, adding a human verification step to confirm the results after the process is done.
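For the Apache POI side, here's a minimal sketch of the "first fully filled row" heuristic described above (I'm assuming a recent POI 4.x/5.x API, and the file name and class are just illustrative, not from the question):

import org.apache.poi.ss.usermodel.*;
import java.io.File;

public class HeaderGuess {
    // Probable horizontal header: the first row in which every column,
    // up to the widest row of the sheet, is non-blank (heuristic only).
    static int firstFullyFilledRow(Sheet sheet) {
        int width = 0;
        for (Row row : sheet) {
            width = Math.max(width, row.getLastCellNum());
        }
        for (Row row : sheet) {
            if (width == 0) break;
            boolean allFilled = true;
            for (int c = 0; c < width && allFilled; c++) {
                // RETURN_BLANK_AS_NULL makes empty and missing cells look the same
                Cell cell = row.getCell(c, Row.MissingCellPolicy.RETURN_BLANK_AS_NULL);
                allFilled = (cell != null);
            }
            if (allFilled) {
                return row.getRowNum(); // 0-based index of the candidate header row
            }
        }
        return -1; // no candidate found
    }

    public static void main(String[] args) throws Exception {
        try (Workbook wb = WorkbookFactory.create(new File("template.xlsx"))) {
            System.out.println("Probable header row: " + firstFullyFilledRow(wb.getSheetAt(0)));
        }
    }
}

The same loop is easy to repeat column-wise for a probable vertical header, and you can combine it with the formatting and data-type checks mentioned above before trusting the result.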
The solution to this problem involves taking away two of these freedoms. Applying such constraints will make this a tractable problem. Most of these freedoms come from overcautious thinking.
The freedoms are given as quotes below:
Headers can be horizontal AND/OR vertical in nature. This means that there could be a mixture of vertical and horizontal headers in one sheet.
Typically, vertical headers are not used in Excel files where there is a need to programmatically detect headers, as the primary, most common, and sometimes only reason for such detection is to upload/transform the tabular data.
Funny things happen when vertical headers are introduced:
They become labels of forms. This implies that such forms are used for data entry rather than storage. The data from such forms is stored under horizontal/columnar headers as row-wise records of data, thus obviating the need for upload/transformation of the data entry sheet.
Excel is designed to have only horizontal headers; vertical headers cease to have AutoFilter support.
Even when Vertical Headers are present, a top horizontal header row can still be introduced to mark the headers themselves as descriptions / categories.
Staying true to the core need for autodetection of headers, we can state that once our requirement says headers can be placed only in a horizontal alignment, the solution becomes slightly more tractable, but not fully so.
Some cells are merged to make room for a specific header.
Merging cells is poison and anathema to the entire reason for transformation/upload of data. This is a pill I steadfastly have refused to take in my entire career with Excel & SQL jugglery. You may kindly merge all that you want to for all I care, however thee shall not pass into my beloved SQL Server.
For the aforementioned reasons of prejudice and ill will towards all mergers and mergees alike, I'd respectfully suggest that you too take this course.
Solution
Staying true to the above requirements after taking away the two freedoms, the pseudo-algorithm (solution) is to:
Take a sample of, say, c × r Excel rows and columns, e.g. 200 × 201 rows and columns.
Find the counts of non-empty cells, i.e. cells whose contents have a non-zero length, using an inbuilt formula like COUNTA. The count of such non-empty cells in each row is maintained in a data structure.
The type of data, i.e. Number, Date or String, should also be maintained in the above data structure, which should be capable of expressing the following:
Row# 22 contains
30 non-empty cells of which
28 are alphanumeric,
1 is a Date and
1 is a Number.
The first row that contains the maximum number of such non-empty cells, with the maximum number of strings among them, is very likely the header row.
Converting all of the above to a specific algorithm in any given language should be a deliciously occupying task for any young developer in their prime.
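As a rough starting point in Apache POI terms (the class and method names, the tie-breaking rule and the POI 4.x/5.x API are my own assumptions, not part of the answer above):

import org.apache.poi.ss.usermodel.*;

public class HeaderScan {
    // For each of the first sampleRows rows, count non-empty cells and
    // string cells; return the first row that maximises the non-empty count,
    // breaking ties by the string count, or -1 if every sampled row is empty.
    static int probableHeaderRow(Sheet sheet, int sampleRows) {
        int bestRow = -1, bestNonEmpty = 0, bestStrings = 0;
        for (Row row : sheet) {
            if (row.getRowNum() >= sampleRows) break;
            int nonEmpty = 0, strings = 0;
            for (Cell cell : row) {
                CellType type = cell.getCellType();
                if (type == CellType.BLANK) continue;
                if (type == CellType.STRING) {
                    if (cell.getStringCellValue().trim().isEmpty()) continue; // zero-length content
                    strings++;
                }
                nonEmpty++; // numbers, dates and formulas all count as non-empty here
            }
            if (nonEmpty == 0) continue; // ignore completely blank rows
            if (nonEmpty > bestNonEmpty || (nonEmpty == bestNonEmpty && strings > bestStrings)) {
                bestRow = row.getRowNum();
                bestNonEmpty = nonEmpty;
                bestStrings = strings;
            }
        }
        return bestRow;
    }
}

Note that POI reports dates as numeric cells, so the sketch only separates strings from everything else, which is all the scoring rule above needs.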
I'm not even sure how to ask this.
I have a database, where each row is a person. Columns are contact info, phone, etc. One column is 'date visited'. There can be multiple dates visited for each person. I don't want to use a comma or stack them all in one field.
Is there a way to have a 'nested' list (not a drop-down menu - just a list of visited dates for each person), such that one person still only consumes one single row?
Yes,
To accomplish this, give each person an ID that is unique and won't change.
Then on a separate sheet, store the ID and date.
Main sheet (ID, Name, Contact Info, Phone, etc.)
Second sheet (ID, Date Visited)
In database theory this is called a 'one to many' relationship, and what I'm describing is called 'normalizing your dataset'.
Once you've split this apart, you can use formulas in Excel to manipulate the data however you need or can imagine.
As you mentioned in a comment, you might want to count all visited dates for a user.
On the main sheet to the right you could use:
=COUNTIF(Sheet2!A:A,Sheet1!A1)
This counts all of the IDs in the second sheet that match the current row's ID on your main sheet.
Notes about using one cell:
Storing all the dates in one cell will eventually max it out, and will make it hard to view/search as it grows, so I highly advise against this approach.
If, however, you insist on keeping the dates in there, you could count the visits by counting the total number of commas + 1, like this: =(LEN(G1) - LEN(SUBSTITUTE(G1,",","")))+1. This formula takes the length of all the dates and the length of the dates with the commas removed, and subtracts them to get the number of occurrences.
Notes about using multiple columns:
This approach has the same idea as the one I suggested, where we are associating a number of dates with the row's identity of a person. However, there are a few key limitations and drawbacks.
The main difference is that when we abstract the dates by transposing them to extend vertically, we can manipulate them more easily, and a list of 20 dates for one person becomes much easier to read. By transposing the dates vertically in the second sheet instead of using this approach, we also gain the ability to use Excel's built-in filter. Just storing large amounts of data is useless by itself; storing it in a way that you can view and manipulate easily makes everything much more powerful.