We have a requirement to load around 4,000 records into two separate editor tree grids and, after comparing the values of a particular column in each tree, highlight the differences in each record. Everything is fine with a limited number of records, but at 4,000 records or more performance collapses: the tree grid takes around 10 minutes to render, since rendering includes expanding all nodes, the calculations to construct the parent-child relations, and then the highlighting.
One solution I considered was an approach similar to Live Grid, but the highlighting logic needs all the records at once, since the third record in grid 'A' may match the 115th record in grid 'B'. Live Grid would not retain the previous selections when it brings in the next set of records.
Considering the above, what would be the best way of achieving this? Can I just keep adding new records to the store as I scroll down? I think it could be done by tracking the scroll position without using the Live Grid, but I am not sure how to achieve this, or even whether it's the right approach. Could anybody provide some sample code that adds elements to the store when the user reaches the end of the vertical scroll in an EditorTreeGrid, or suggest a better way to achieve this? My attempt to attach a scroll listener somehow never fires.
Also, the Live Grid uses a ListStore, whereas I use an EditorTreeGrid. How do I effectively populate a tree store from it? I used to call getAllModels and populate the results into the TreeStore. Is that the right way to do this?
In the end we dropped the tree structure and overrode LiveGrid and LiveGridView to achieve this. LiveGrid does not load the complete dataset into the UI; it tracks the scroll position and brings in data on an as-needed basis.
I'm using the Graph visual in Azure Monitor Workbooks. The problem is that with every refresh the layout comes out randomly different.
Is there any way to programmatically fix the layout (i.e. the positioning of the nodes)?
(Screenshots omitted: the same graph rendered twice, the second after only hitting refresh.)
Unfortunately, there's no way to guarantee it. The portal's graph control is relatively limited in this regard.
One thing you can possibly do is make sure that your query has a predictable row order. If the nodes and the edges always come back in the same order in the results, it should hypothetically lay out the same way every time.
I am working on a project that requires fast proximity queries on a database with location data.
In my database I want to store locations with additional information. The idea is that the user opens a map at a certain location and my program fetches only the markers visible to them. If I plan on having millions of values, fetching markers from NYC while I'm zoomed in on London would make the map activity extremely slow, and the data I send back from the db would be HUGE.
That's why, when the user opens the map, I want to fetch all the markers that are, for example, within 10 km of the center of the map. (I'm okay with fetching markers outside of the visible area; I just don't want to fetch markers that are 100 km away.)
After thorough research I chose the S2 Geometry Library approach with its Hilbert space-filling curve.
The idea of mapping a 2D location to a single integer, where the longer the shared prefix between two indexes, the spatially closer they are together, was a big selling point.
I need my database to be able to perform this SELECT query lightning fast and I expect to have A LOT of data in the future so operating on only one column is a big plus.
Also, the thing that intrigued me the most was the ability to perform fast proximity searches, because two cells that are close to each other on the map will have 1D indexes that are also close to each other.
The idea looks very simple (if I'm not missing anything).
The thing I'm having problems with is how to pick (if it's even possible) the min and max values on the 1D curve so that I'm sure I'm scanning the whole visible area.
Most of the answers and tutorials I find on the internet propose a solution where you take a bounding area full of smaller S2 index "boxes" and then scan every index in the database to see if it's contained in one of the "boxes" from the array. This is easy to do, but when you have 50 million records it's not feasible to go through every single one of them to check whether it's in one of the "boxes".
What I have in mind is a solution where you take the minimum and maximum values of the area you're searching in and perform something along the lines of SELECT (...) WHERE s2cellid BETWEEN min AND max.
For example, if I'm at location 47194c and want to fetch all markers within 10 km, I take a value that's x to the left of the index and a value that's x to the right of the index and perform a BETWEEN 47194c-x AND 47194c+x query.
Is something like that possible with the S2 library?
If no then what approach should I take to make my queries as quick as possible?
Thanks in advance :)
[I plan on using PostgreSQL]
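For illustration, here is a minimal sketch of how this usually looks in practice with s2sphere (a pure-Python port of the S2 library): instead of a single global min/max, a RegionCoverer yields a small set of cell-ID ranges, each of which becomes one BETWEEN clause. All names here (the markers table, the s2cellid column, the coordinates) are assumptions, not from the original post.

```python
import math
import s2sphere

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def covering_ranges(lat, lng, radius_m, max_cells=16):
    """Return (min_id, max_id) pairs of S2 cell IDs covering a spherical cap."""
    center = s2sphere.LatLng.from_degrees(lat, lng).to_point()
    angle = s2sphere.Angle.from_degrees(math.degrees(radius_m / EARTH_RADIUS_M))
    cap = s2sphere.Cap.from_axis_angle(center, angle)
    coverer = s2sphere.RegionCoverer()
    coverer.max_cells = max_cells  # trade-off: fewer ranges vs. tighter fit
    return [(c.range_min().id(), c.range_max().id())
            for c in coverer.get_covering(cap)]

# One BETWEEN per covering cell, instead of one global min/max; this
# skips the distant cells that happen to sort between the extremes.
ranges = covering_ranges(51.5074, -0.1278, 10_000)  # 10 km around London
where = " OR ".join(["s2cellid BETWEEN %s AND %s"] * len(ranges))
sql = "SELECT * FROM markers WHERE " + where
params = [v for pair in ranges for v in pair]
# cursor.execute(sql, params)  # e.g. with psycopg2; note S2 IDs are
# unsigned 64-bit, so a signed bigint column may need an offset.
```

Each range is contiguous on the Hilbert curve, so a plain B-tree index on s2cellid serves every clause; the covering is what prevents the single-BETWEEN problem of sweeping in far-away cells.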
Is it possible to use React Virtualized when there is no notion of a row index to get rowdata?
I would like to use React Virtualized to display data coming from a large (100k+ rows) database-table that is constantly being modified: rows are added/deleted/updated at random positions in the table.
I have no function that can get a row by using a row index because the position of every row is changing every few seconds.
The table is sorted and every row is guaranteed to have a unique content, so what I do have are the following functions:
getFirst/LastRow() => data : get the data content for the (currently) first/last row
getNext/PreviousRows(startData, nrRows) => data[] : get the data content for the (currently) next/previous nrRows, starting at row with content startData
findRow(data) => data : find the row that has content data
I also have an observer function that is tracking the table mutations in real-time, so I can get a callback for every insert/delete/update operation for the table.
Is there a way to map these available functions to a workable React Virtualized configuration?
Is it possible to use React Virtualized when there is no notion of a row index to get rowdata?
No. Mapping an index (or indices) to data is core to how react-virtualized works. You would need to build/maintain some structure that allowed you to efficiently access data at an index (even if that index frequently changed) in order to benefit from the lib.
When the data changes, every few seconds, is it usually items being appended onto one end of the remote collection? Or could it be resorting, deletions, etc?
The linked-list style API you describe almost seems like it was meant more to work with a pagination control.
I think the answer as asked might be "Yes", if the semantics of "to use React Virtualized" can be understood to mean that RV is "a piece of the puzzle". I think the opposite answer stated it very well and accurately in terms of RV itself having "no notion of a row index": RV does need to know about its own row indexes. But if RV is implemented and simply handed data at whatever event interval (e.g. pagination) by a "parent", it is possible to use RV without it having any understanding of the parent's row index being used "to get rowdata".
You could think of your paginated app as the parent and the dataGrid component as the child. The state/render cycle in your parent React app just needs to re-render the dataGrid (and its scroller) at the spot the user will find intuitive in relation to your data "pages".
I've been modeling a similar solution with a different "dataGrid" component, and following various mindshare on the issue. At the risk of [more] tl;dr, my current technique is like this:
feed a total rowcount to the dataGrid component, so it sets up as if I'm going to give it the whole API in one un-paginated shot
feed only the data from the currently available data page (say 500 records)
dataGrid's onScroll event calculates a conditional threshold for me to cross (scrollTop, etc.) indicating that the list has scrolled to the top or bottom row of the current page as my parent application understands it
onScroll calculates rows and pages, gets new data, and sets new state
new state has the same huge rowcount, but along with different data rows from an API call
use should/did update lifecycles in React to:
"scrollTo" and reposition the component grid to whichever row my
parent understands as the top of right "page", and
(warning... prototype) I concatenate my paged data array with padding (cringe), like [...Array(rowsPerPage * pageCount)].concat(newData), so that the child component leaves the drag handle where the user expects just before I render. Ultimately I'm hoping to do this by adjusting the height of the top- or bottom-most divs in my list to compensate for the intended scrollbar position, but it's an evolving work in progress.
I've also maybe found, in my own pagination, a macrocosm of the same issues the windowing libraries solve with over-scanning rows. It seems I must "overscan" in my API calls so that I have a buffer outside the windowing component's own top/bottom. This may well go away with better logic in the end. Fingers crossed.
References I am finding helpful:
Some more history for the discussion using this library is here.
Also, react-window-paginated looks to have solved the basics of the problem your comments describe, using the newer react-window lib. This solution is very compelling to me as a code design for where I might end up in my final composition. RW doesn't give you RV's feature set, but it's easily extendable, and it may be easier to start here if you don't need tons of RV features out of the gate.
If you were to tackle the pagination solution in RV, this might stimulate a prototype, though it does not solve the API pagination or mapping/threading issue itself.
My PyQt application pulls data from third party API calls. The dataset returned usually contains in the neighborhood of hundreds of items. On occasion, the dataset returned contains in the tens of thousands of items. On those occasions, the interface is painfully slow to display - too slow to be useful.
To speed things up, I would like to load less of the data during the initial load. I would like to be able to populate the interface based on the scrollbar handle position. I would prefer that the scrollbar have the correct range as soon as the widget is displayed, but as the user scrolls, the data that they should be seeing is populated into the widget (a QTreeWidget in this case). This is to say that I'd rather the user didn't have to scroll to the bottom of the widget to load more data at the bottom & therefore change the range of the scroll bar.
I believe QSqlTableModel behaves this way out of the box, but because I'm not relying on SQL queries (and because some of the columns' data is calculated by the GUI), I don't believe I can use that module. Is this possible with QTreeWidget and without direct SQL calls?
There is built-in functionality for this in the Qt model-view framework: see QAbstractItemModel.canFetchMore() and QAbstractItemModel.fetchMore().
Oh, I've just realised you aren't using the MVF but a stand-alone QTreeWidget instead. If you are dealing with large data and require such functionality, a switch to the MVF may be the right thing to do.
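As a rough illustration of what that switch buys you, here is a minimal PyQt5 sketch of a lazily populated model; the class name, batch size, and dummy data are all assumptions. A real tree model would implement the same canFetchMore()/fetchMore() pair per parent index.

```python
import sys
from PyQt5.QtCore import QAbstractListModel, QModelIndex, Qt
from PyQt5.QtWidgets import QApplication, QTreeView

class LazyListModel(QAbstractListModel):
    BATCH_SIZE = 256  # rows exposed per fetchMore() call

    def __init__(self, items, parent=None):
        super().__init__(parent)
        self._items = items   # full dataset already fetched from the API
        self._loaded = 0      # rows currently exposed to the view

    def rowCount(self, parent=QModelIndex()):
        return 0 if parent.isValid() else self._loaded

    def data(self, index, role=Qt.DisplayRole):
        if index.isValid() and role == Qt.DisplayRole:
            return str(self._items[index.row()])
        return None

    def canFetchMore(self, parent):
        return not parent.isValid() and self._loaded < len(self._items)

    def fetchMore(self, parent):
        count = min(self.BATCH_SIZE, len(self._items) - self._loaded)
        self.beginInsertRows(QModelIndex(), self._loaded,
                             self._loaded + count - 1)
        self._loaded += count
        self.endInsertRows()

app = QApplication(sys.argv)
view = QTreeView()
view.setModel(LazyListModel([f"item {i}" for i in range(50_000)]))
view.show()
sys.exit(app.exec_())
```

One caveat relative to the question: the view calls fetchMore() as the user nears the end of the loaded rows, so the scrollbar range grows batch by batch rather than being correct from the start.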
I would like to implement a GUI handling a huge number of rows and I need to use GTK in Linux.
I started having a look at GtkTreeView with lists, but I don't think that adding millions of lines directly to that widget will result in a GUI that doesn't slow the application down.
Do you know whether there is a GTK widget already in place for this problem, or do I have to handle the window frame that must display those lines myself? If it comes to it, I would draw the data directly using GtkDrawingArea (essentially writing a new widget).
Any suggestions about GTK topics or projects I could look at as a starting point for my research?
As suggested in the comments, you can use the Cell Data Func and get the displayed data under control. But I have another idea: millions of lines are much, much more than any amount of information a human user can see and understand, so maybe a better, more usable and user-friendly solution is to display the data in a way the users can more easily navigate.
Imagine opening a huge hierarchy, scrolling down, and forgetting what were the top-level items you opened.
Example of a possible solution: have a combo box which allows the user to choose some filter or category, reducing the data to an amount the user can more easily navigate and, if necessary, build a mental model of.
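A minimal PyGObject (GTK 3) sketch of that combo-box-plus-filter idea; the categories and two-column store are made up for illustration, and Gtk.TreeModelFilter does the reduction:

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# (category, text) rows; only one category is shown at a time.
store = Gtk.ListStore(str, str)
for i in range(10_000):
    store.append(["even" if i % 2 == 0 else "odd", f"row {i}"])

current = {"category": "even"}

def visible(model, it, data):
    # Called with the child model; show rows matching the combo selection.
    return model[it][0] == current["category"]

filtered = store.filter_new()
filtered.set_visible_func(visible)

tree = Gtk.TreeView(model=filtered)
tree.append_column(
    Gtk.TreeViewColumn("Text", Gtk.CellRendererText(), text=1))

combo = Gtk.ComboBoxText()
for name in ("even", "odd"):
    combo.append_text(name)
combo.set_active(0)

def on_changed(widget):
    current["category"] = widget.get_active_text()
    filtered.refilter()  # re-evaluate visibility for all rows

combo.connect("changed", on_changed)

box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
box.pack_start(combo, False, False, 0)
scroller = Gtk.ScrolledWindow()
scroller.add(tree)
box.pack_start(scroller, True, True, 0)

win = Gtk.Window(title="Filtered view")
win.add(box)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()
```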
NOTE: As far as I know, GtkTreeView doesn't support sorting/filtering and drag-n-drop at the same time, so if you want to use both features, I suggest you use the existing drag-n-drop functionality (otherwise very complicated to implement by hand) and implement your own sorting/filtering.