I have a QTableView, and I am querying data from a database using QtSql.QSqlQuery:
SQL = 'SELECT * FROM table1'
Query = QtSql.QSqlQuery(database)
Query.prepare(SQL)
Query.exec_()
model = QtSql.QSqlTableModel()
model.setTable('Table')
model.setQuery(Query)
proxy = QtGui.QSortFilterProxyModel()
proxy.setSourceModel(model)
tableView.setModel(proxy)
Everything is working fine; the query result is shown in the QTableView.
My issue is that when I change the SQL statement so that the query returns 0 records, I need to clear the data and the cells in the QTableView.
I tried using QTableView.clear(); it clears the data in the cells but leaves behind empty rows and columns. How can I clear the QTableView completely?
In C++ there is a reset() function on QAbstractItemView, and the same method is available in PyQt.
You could say:
yourTableView.reset()
My research into clearing the data paid off: I used proxy.deleteLater().
Hope this benefits someone who comes across the same situation.
What I prefer in this situation is a bit of a workaround, due to the limitations of QTableView.
In my own program, I decided instead to hide the rows and columns of the QTableView, making it appear blank and back at its defaults. The reason is that if you delete the widget, you have to replace it, and that also resets all of the object's properties; you could end up re-assigning every property of your QTableView each time you just want it to display as blank or cleared. Instead of deleting the widget, this option takes just two simple loops, and when you refresh your table and want to display the values again, another two simple loops get the job done.
There are multiple ways to populate a QTableView, but whichever you use, you just need to iterate through the rows and columns of your table. Below is the method I use, based on a class that inherits from QtCore.QAbstractTableModel, so that calling dataFrame returns a pandas DataFrame of the QTableView data.
If you need to hide the table:
for _row in range(len(yourQTableView.model().dataFrame.index)):
    yourQTableView.hideRow(_row)
for _col in range(len(yourQTableView.model().dataFrame.columns)):
    yourQTableView.hideColumn(_col)
If you need to show the table:
for _row in range(len(yourQTableView.model().dataFrame.index)):
    yourQTableView.showRow(_row)
for _col in range(len(yourQTableView.model().dataFrame.columns)):
    yourQTableView.showColumn(_col)
To me, this is the easiest method and gets me the result I want. I can't think of a reason why this method wouldn't be preferred over deleting the widget altogether.
If your model supports it (for example a QStandardItemModel), you can simply drop all the rows:
self.table_view.model().setRowCount(0)
I am creating a table using Tabulator, which seems great and very powerful.
I want a way to save the relevant data of the table so it can be recreated on the fly.
Currently, I think there are a few things I need...
The row data - I get this using table.getData();
The columns - I get this using table.getColumnDefinitions();
The row data seems perfect; I can store that and use it. However, the column information I am saving doesn't appear to include the size of the columns if I have resized them.
Is there a way of getting ALL the relevant column info, so I can save and recreate it exactly?
Alternatively, if there is a single function that saves everything (row data, columns including order, size, etc.) in one go as JSON or similar, that may be handy.
So you have a few options here.
Config Persistence
If you simply want the table to look the same way it did the last time the user used it on that computer, you could look at using the Persistent Configuration module. This stores a copy of the table's column configuration in the browser's local storage so that the next time they load the page it is laid out the same.
Column Layout
If you want to store it externally, then you are correct: the column width is not updated in the definition after a user changes it.
To get the current layout of the columns, you can use the getColumnLayout function:
var columnLayout = table.getColumnLayout();
Note that this will only contain the key layout characteristics, not the full definition; you would need to merge the two if you wanted to store them in one place.
More details on this method can be found in the Manual Column Layout Documentation
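The merge mentioned above can be sketched generically. This is shown in Python for brevity (the dict-merge logic translates directly to JavaScript); the `field` and `width` keys mirror what Tabulator layout entries typically carry, and the sample data is made up:

```python
# Merge Tabulator's getColumnLayout() output (which carries current
# widths) into getColumnDefinitions() (which does not), matching
# entries on their "field" key.

def merge_layout(definitions, layout):
    """Return column definitions with layout info (e.g. width) merged in."""
    by_field = {entry["field"]: entry for entry in layout}
    merged = []
    for col in definitions:
        extra = by_field.get(col["field"], {})
        merged.append({**col, **extra})  # layout values win on conflict
    return merged

definitions = [
    {"field": "name", "title": "Name"},
    {"field": "age", "title": "Age"},
]
layout = [
    {"field": "name", "width": 150},
    {"field": "age", "width": 60},
]

print(merge_layout(definitions, layout))
```

Storing the merged list gives you one JSON blob that, together with getData(), is enough to recreate the table as the user left it.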
Let's say I have some external .csv files which get updated, and I just need to hit the refresh button in Power Query to make some magic; that works fine. BUT there are some columns with information about parts, and I need to look up values for them in another .csv file. What I did here is not convert all 4 columns into one Table; I separated them, each column with its own (table) name, because I had some issues with refreshing from Power Query, and it seemed easier to do the calculation first and then convert to a table. Maybe that was not smart, though?
My question, and the actual issue, is that I am not getting new rows with new data beneath my "tables"; I must drag the formulas down to populate them. Why does that happen?
These are the functions I used, starting with the first column:
=INDEX(Matrix[[#All];[_]];ROW())
The others are just lookups, depending on which info I am looking for:
=INDEX(variantendb[Vartext];MATCH(C2;variantendb[Variante];0))
And the last column's calculation concatenates the info name and code together:
='0528 - info'!$D2 & " "& "("&'0528 - info'!$C2&")"
I made all of them as five SEPARATE tables, not one. Maybe I should use one table, then do the calculations, and then it would update dynamically?
It updates automatically only when I add new data somewhere in the middle of the .csv, but not when it is on the last row; then the table does not expand.
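For what it's worth, the INDEX/MATCH formula above is essentially a keyed lookup: find the row where the key column matches, return the value column from that row. A small Python sketch of the same idea, with made-up table contents:

```python
# variantendb stands in for the lookup table; Variante is the key
# column and Vartext the value column, as in the formula above.
variantendb = [
    {"Variante": "A1", "Vartext": "Base model"},
    {"Variante": "B2", "Vartext": "Sport package"},
]

# Build the lookup once; each row's INDEX/MATCH then becomes a
# single dictionary access.
lookup = {row["Variante"]: row["Vartext"] for row in variantendb}

print(lookup["B2"])  # the value INDEX/MATCH returns when C2 == "B2"
```

This is also why Power Query's Merge (mentioned in the answer below as a solution) fits so naturally: a merge on the key column does the same keyed lookup for every row at once.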
Well, I solved it. How? By using Power Query at its best. Playing around actually gave me a completely different approach to my problem, using the Merge function and a bit of formatting. It works flawlessly, with a minimum of functions afterwards. What is important is that it refreshes in a millisecond, PROPERLY!
I am amazed by PQ and its functionality.
I am using Power Query Editor to create a working file, using multiple tables from several sources.
After I combine these and make my working file, I am using it to make some work on columns I add later on the working file.
I have noticed that the values I enter in the working file are not bound to the main key (let's assume the first column); they are independent values in a column.
The result is that if one table changes, for example one line is deleted or I change the sorting of the Query, my working file is wrong, since the data changed but the added columns remain as they were.
Is there a way to have the added columns to be bound with a value, as it is for example with VLOOKUP?
How can I make a file that updates from different sources but still lets me work on it without the risk of misplacing the work I do?
I hope I am clear.
Thank you in advance!
This is fairly simple if each line in your table is unique (in your example, you say the first column can serve as a key). Set up your working columns on the table, then load the table into PQ (as a connection only). Then go to the original query that combines your data and add a merge at the end, merging against the table you just loaded into PQ and matching on your key. Finally, expand only your working columns from the merge.
This way, whenever you refresh your table, it will match lines against its existing output before updating, so data in your work columns is maintained. Note, however, that this only retains values, not any formulas you may be using in your work columns.
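As a rough illustration of what that merge does (all column and key names here are made up), the refreshed source rows are matched on the key against the previously saved working table, and only the working columns are carried over:

```python
# Sketch of the "merge back the working columns" step in plain Python.
def carry_over_working_columns(refreshed, saved, key, working_cols):
    saved_by_key = {row[key]: row for row in saved}
    out = []
    for row in refreshed:
        prior = saved_by_key.get(row[key], {})
        merged = dict(row)
        for col in working_cols:
            merged[col] = prior.get(col)  # None when the key is new
        out.append(merged)
    return out

refreshed = [{"ID": 1, "Part": "bolt"}, {"ID": 3, "Part": "nut"}]
saved = [{"ID": 1, "Part": "bolt", "Note": "checked"},
         {"ID": 2, "Part": "washer", "Note": "obsolete"}]

print(carry_over_working_columns(refreshed, saved, "ID", ["Note"]))
```

Note how a deleted source row (ID 2) drops out entirely, and a new row (ID 3) gets an empty working column, which is exactly the behavior the merge approach gives you.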
I am working on an application where there is a desire to automate data entry as much as possible. The wish is to add a button to such entry forms for choosing an Excel file to import. I have done this for one interface, and now I'm working on others. I'm looking for the best way to prevent duplicates from being imported into a table. The one I am working on now is a simple 2-column import. One method I have used before is to import the spreadsheet into a temp table, then use a query to insert where <> . I just wonder if this is the best method to use.
Any thoughts?
Thanks!
Something like this should work. I can tailor it more if you list some more details of your projects.
From "External Data" on the ribbon, link to the excel file.
Then write the following query:
INSERT INTO table1
(
field1,
field2
)
SELECT
a.field1,
a.field2
FROM tableExcel AS a
LEFT JOIN table1 AS b ON a.field1 = b.field1
WHERE (((b.field1) Is Null));
Then just attach a macro to the button running the query above.
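To see the anti-join pattern from that query in action outside Access, here is a small sketch using Python's built-in sqlite3 (table names mirror the query above; the SQL dialect differs slightly from Access):

```python
import sqlite3

# The LEFT JOIN keeps only Excel rows whose key has no match in
# table1 (the joined b.field1 comes back NULL), so rows that
# already exist are never inserted a second time.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (field1 TEXT, field2 TEXT);
    CREATE TABLE tableExcel (field1 TEXT, field2 TEXT);
    INSERT INTO table1 VALUES ('A', 'old');
    INSERT INTO tableExcel VALUES ('A', 'dup'), ('B', 'new');
""")
con.execute("""
    INSERT INTO table1 (field1, field2)
    SELECT a.field1, a.field2
    FROM tableExcel AS a
    LEFT JOIN table1 AS b ON a.field1 = b.field1
    WHERE b.field1 IS NULL
""")
rows = con.execute(
    "SELECT field1, field2 FROM table1 ORDER BY field1").fetchall()
print(rows)  # only 'B' was added; the duplicate key 'A' was skipped
```

Note the NULL test must be on the joined table's column (b.field1), not the source column, or the anti-join filters out everything.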
I ended up finding the solution that will work best. I can put an index on the 2 fields being imported from the spreadsheet into the table. Then, before I issue the TransferSpreadsheet command, I will set warnings to false, and set them back to true once it is done. This way, the user won't get errors from the indexes doing their job of rejecting duplicates.
Anyone see any problem with that solution? The only bummer is that if I imported to a temp table, I could get a count of items first and verify the count after the insert, so I could report some info to the user during the process. Other than that, this means I don't need a temp table, and I can go directly into the target table without worrying about importing dupes.
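As a sketch of how the unique-index approach behaves, here is an analogous experiment in Python's sqlite3, where INSERT OR IGNORE plays the role of suppressing the warnings. Counting before and after also recovers the imported-row count mentioned above, even without a temp table:

```python
import sqlite3

# A unique index on the imported fields rejects duplicate rows;
# OR IGNORE silently skips the rejected ones instead of erroring.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (field1 TEXT, field2 TEXT)")
con.execute("CREATE UNIQUE INDEX idx_parts ON parts (field1, field2)")
con.execute("INSERT INTO parts VALUES ('A', 'x')")  # pre-existing row

incoming = [("A", "x"), ("B", "y"), ("C", "z")]  # one duplicate
before = con.execute("SELECT COUNT(*) FROM parts").fetchone()[0]
con.executemany("INSERT OR IGNORE INTO parts VALUES (?, ?)", incoming)
after = con.execute("SELECT COUNT(*) FROM parts").fetchone()[0]
print(f"imported {after - before} of {len(incoming)} rows")
```

In Access you would get the same count by running SELECT COUNT(*) on the table before and after the TransferSpreadsheet call.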
I have an Excel file, and I want to import it into an existing database table using Entity Framework. Right now I first convert the Excel sheet to a DataTable and loop through each row of the DataTable. Each row has an id field; if the id exists in the database table I need to update that row, otherwise I need to insert the row into the table. I want Entity Framework to wrap my loop in one transaction, for rollback purposes in case of error. But I run into a scenario with two rows that have the same id but different values. The first row is checked and added to my entity collection, but the second row might mistakenly try to update the first one, which has not actually been added yet because context.SaveChanges() is delayed until after the loop. How can I update the previously added row in the entity collection without repeatedly calling context.SaveChanges() inside my loop? Thanks.
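One way to think about the fix, independent of Entity Framework: stage pending rows in a dictionary keyed by id, so a repeated id updates the staged row instead of creating a second insert, and flush once at the end (the SaveChanges equivalent). A language-agnostic sketch in Python, with illustrative names:

```python
# Stage inserts/updates keyed by id so duplicate ids within one
# import batch collapse onto the same pending entity.
def import_rows(existing_by_id, rows):
    pending = {}  # id -> row dict staged for a single final flush
    for row in rows:
        rid = row["id"]
        if rid in pending:
            pending[rid].update(row)   # same id seen again: update staged row
        elif rid in existing_by_id:
            pending[rid] = {**existing_by_id[rid], **row}  # update a db row
        else:
            pending[rid] = dict(row)   # brand new row: staged insert
    return list(pending.values())      # flush once, like SaveChanges()

existing = {1: {"id": 1, "name": "old"}}
rows = [{"id": 2, "name": "first"}, {"id": 2, "name": "second"}]
print(import_rows(existing, rows))
```

In EF terms, the dictionary plays the role of checking the context's Local collection for an already-tracked entity with that id before deciding between an update and an add, so SaveChanges still runs only once for the whole transaction.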
I don't think I have done it over the past decade or so, but I have used Microsoft Word's Mail Merge to create the SQL statement that I needed (SELECT, INSERT and UPDATE) for each line in an Excel sheet. Once I got the long SQL statement in text I simply copy-paste it into the console and the statement was executed and the job was done. I am confident that there are better ways of doing this but it worked at the time with limited knowledge but a need. This answer is probably in the category "don't try this at work, but it is fine to do it at home if it does the job".