Tabulator - Getting Columns including order and size

I am creating a table using Tabulator, which seems great and very powerful.
I want a way to save the relevant data of the table so it can be recreated on the fly.
Currently, I think there are a few things I need...
The row data - I get this using table.getData();
The columns - I get this using table.getColumnDefinitions();
The row data seems perfect; I can store that and use it. However, the column information I am saving doesn't appear to include the sizes of the columns after I have resized them.
Is there a way of getting ALL the relevant column info, so I can save and recreate it exactly?
Alternatively, if there's a single function that saves everything (row data, columns including order, size, etc.) in one go as JSON or similar, that would be handy.

So you have a few options here.
Config Persistence
If you simply want the table to look the same way it did the last time the user used it on that computer, you could look at using the Persistent Configuration module. This stores a copy of the table's column configuration in the browser's local storage, so the next time they load the page it will be laid out the same.
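As a rough sketch, enabling it can be as simple as setting the persistence option when creating the table (the element id and columns below are placeholder values):

var table = new Tabulator("#example-table", {
    persistence: {
        sort: true,    // persist column sorting
        filter: true,  // persist applied filters
        columns: true, // persist column order, width and visibility
    },
    columns: [
        {title: "Name", field: "name"},
        {title: "Age", field: "age"},
    ],
});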
Column Layout
If you want to store it externally then you are correct:
the column width is not updated in the definition after a user resizes it.
To get the current layout of the columns, you can use the getColumnLayout function:
var columnLayout = table.getColumnLayout();
This will only contain the key layout characteristics, not the full definition, so you would need to merge the two if you wanted to store them in one place.
More details on this method can be found in the Manual Column Layout Documentation
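Putting the pieces together, a minimal sketch of a save function might look like this (the merge-by-field logic is my own suggestion, not a built-in Tabulator feature):

function saveTableState(table) {
    var definitions = table.getColumnDefinitions();
    var layout = table.getColumnLayout(); // current order, widths, visibility

    // Walk the layout (which reflects the current column order) and
    // overlay each entry onto its matching stored definition
    var columns = layout.map(function (layoutCol) {
        var def = definitions.find(function (d) {
            return d.field === layoutCol.field;
        });
        return Object.assign({}, def, layoutCol);
    });

    return JSON.stringify({
        data: table.getData(),
        columns: columns,
    });
}

To recreate the table you could pass the saved data and columns arrays straight back into the constructor, or use setColumnLayout to re-apply a stored layout to an existing table.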

Related

Excel - List of key values created from external files in Power Query, trouble with editing mapped values

I am attempting to create a standardized list of names for a long list of free-typed values in a set of CSVs pulled from Jira.
What I have tried so far has been to use Get Data -> From File -> From Folder
And then narrow it down to just the column I need and then remove all duplicate rows.
After loading that, I have tried adding a column that's just an empty string. I have done this both in Power Query and in the data model, with the same effect. I want the second column so the user can map the values in the key column on a worksheet. This table will be used as a map for pivot tables to standardize names. Attempting to update the value in a worksheet and then refreshing to see that change in the data model just reverts the value back to an empty string.
Obviously I'm going about this the wrong way. The goal is to be able to maintain this key-value map over the months as new keys are added, and only have to map those new entries rather than comparing everything each time to see what's new. Is there a better way to achieve what I am trying to do that remains expandable over the months without having to redo the entire workbook?

Table doesn't expand when adding new data (from .csv files in my case)

Let's say I have some external .csv files that get updated, and I just need to hit the refresh button in Power Query to make some magic - that works fine. BUT, there are some columns with information about parts, and I need to look up values for them in another .csv file. What I did here is: I didn't convert all 4 columns into one Table, but separated them, so each column has its own (table) name, because I had some issues with refreshing from Power Query and it seemed easier to do the calculations first and then convert to a table... maybe that was not smart though?
My question, and actually my issue, is that I am not getting new rows with new data beneath my "tables"; I must drag the formulas down to populate them. Why does that happen?
These are the functions I used, starting from the first column:
=INDEX(Matrix[[#All];[_]];ROW())
The others are just lookups, depending on which info I am looking for:
=INDEX(variantendb[Vartext];MATCH(C2;variantendb[Variante];0))
And the last column's calculation concatenates the info name and code together:
='0528 - info'!$D2 & " "& "("&'0528 - info'!$C2&")"
And I made all of these as 5 SEPARATE tables, not as one table. Maybe I should do it with one table and then do the calculations, and then it will update dynamically?
It updates automatically only when I add new data somewhere in the middle of the .csv, but not when it is in the last row - then the table does not expand!
Well, I solved it. How? By using Power Query at its best. I played around and it actually gave me a completely different approach to my problem, using the Merge function and a bit of formatting. Works flawlessly, with a minimum of functions afterwards. What is important is that it refreshes in a millisecond - PROPERLY!!!
I am amazed by PQ and its functionality.
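For illustration, a merge replacing the INDEX/MATCH lookup above might look roughly like this in Power Query's M language (the file path is a placeholder, and the variantendb query name and its columns are taken from the formulas in the question):

let
    // Load the parts list and promote the first row to headers
    Source = Csv.Document(File.Contents("C:\data\parts.csv")),
    Promoted = Table.PromoteHeaders(Source),
    // Left-join against the lookup table on the Variante key,
    // mirroring the INDEX/MATCH above
    Merged = Table.NestedJoin(Promoted, {"Variante"}, variantendb, {"Variante"}, "Lookup", JoinKind.LeftOuter),
    // Pull the Vartext column out of the nested join result
    Expanded = Table.ExpandTableColumn(Merged, "Lookup", {"Vartext"})
in
    Expanded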

How to make a file in Excel that refreshes from Query Editor and still work on it

I am using Power Query Editor to create a working file, using multiple tables from several sources.
After I combine these into my working file, I do further work in columns that I add to it later.
I have noticed that the values I enter in the working file are not bound to the main key (let's assume the first column), but are independent values in a column.
The result is that if one table changes, for example one line is deleted or I change the sorting of the Query, my working file is wrong, since the data changed but the added columns remain as they were.
Is there a way to have the added columns to be bound with a value, as it is for example with VLOOKUP?
How can I make a file that updates from different sources but still lets me work on it without the risk of misplacing the work I do?
I hope I am clear.
Thank you in advance!
This is fairly simple if each line in your table is unique (in your example you say the first column can serve as a key). Set up your working columns on the table, then load the table into PQ (as a connection only). Then go to your original query that combines your data and add a merge at the end, where you merge against the table you just loaded into PQ and match on your key. Then expand only your working columns from the merge.
This way, whenever you refresh your table, it will match lines against its existing output in your worksheet before updating, so the data in your work columns is maintained. Note, however, that this only retains values, not any formulas you may be using in your work columns.
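A minimal sketch of that self-referencing merge in M, assuming the combined query is called Main, the connection-only worksheet table is WorkingTable, the key column is Key and the working column is Notes (all hypothetical names):

let
    Source = Main,  // the query that combines your source tables
    // Left-join the previous worksheet output on the key column
    Merged = Table.NestedJoin(Source, {"Key"}, WorkingTable, {"Key"}, "Work", JoinKind.LeftOuter),
    // Expand only the working columns back out of the join
    Expanded = Table.ExpandTableColumn(Merged, "Work", {"Notes"})
in
    Expanded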

Azure Data Pipeline Copy Activity loses column names when copying from SAP Hana to Data Lake Store

I am trying to copy data from SAP Hana to Azure Data Lake Store (DLS) using a Copy Activity in a Data Pipeline via Azure Data Factory.
Our copy activity runs fine and we can see that rows made it from Hana to the DLS, but they don't appear to have column names (instead they are just given 0-indexed numbers).
This link says: “For structured data sources, specify the structure section only if you want to map source columns to sink columns, and their names are not the same.”
We are fine using the original column names from the SAP Hana table, so it seems like we shouldn't need to specify the structure section in our dataset. However, even when we do, we still just see numbers for column names.
We have also seen the translator property at this link, but are not sure if that is the route we need to go.
Can anyone tell me why we aren't seeing the original column names copied into DLS and how we can change that? Thank you!
UPDATE
Setting the firstRowAsHeader property of the format section on our dataset to true basically solved the problem. The console still shows the numerical indices, but now includes the headers we are after as the first row. Upon downloading and opening the file, we can see the numbers are not there (the console just shows them for whatever reason), and it is a standard comma-delimited file with a header row and one row entry per line.
Example:
COLUMNA,COLUMNB
aVal1,bVal1
aVal2,bVal2
We can now tell our sources and sinks to write and expect this format when reading.
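For reference, the relevant part of the dataset JSON might look roughly like this (a sketch assuming a delimited-text dataset in Data Lake Store; the folder path is a placeholder):

"typeProperties": {
    "folderPath": "myfolder/",
    "format": {
        "type": "TextFormat",
        "columnDelimiter": ",",
        "firstRowAsHeader": true
    }
}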
BONUS UPDATE:
To get rid of the numerical indices and see the proper column headers in the console, click Format in the top-left corner, then check the "First row is a header" box toward the bottom of the resulting blade.
See the update above.
The format.firstRowAsHeader property needed to be set to true.

How to extend data source for pivot table?

I'm synchronizing data with an external database and the result is delivered to me on a sheet called All Items. Setting up a pivot table is easy. However, on the horizontal axis I need to display a custom value, computed from one of the columns in the externally linked data set.
When I go to Analyze -> Change Data Source, I can see that the currently regarded data area is called 'All Items'!Query. I'd like to extend it by a column or two, so that my pivot table can display these values as well.
So, instead of 'All Items'!Query as the data source I'd like to have 'All Items'!Query and the next two columns too. I have no idea how to approach it nor what to try. Suggestions would be warmly appreciated.
I tried to define my own range called 'All Items'!Query_and_stuff, but the number of records retrieved during synchronization varies, so my extension needs to take that into account. No idea how.
Define
'All Items'!Query_and_stuff = OFFSET('All Items'!Query,0,0,ROWS('All Items'!Query),COLUMNS('All Items'!Query)+2)
Anchored at the query's top-left cell, OFFSET returns a range with the same number of rows as the query but two extra columns, so the named range resizes automatically as the record count changes.
