The Excel macro I've created checks the values of some tables for data coherence before running the actual code.
At first the computation time was not noticeable, but my tables are getting bigger and bigger...
What I want to do is check the data coherence only if the contents have been modified since the last check, and I thought of hashing.
But I was wondering: is it possible to quickly create a hash of an entire table? If I hash each cell individually, I'm afraid the computation time will be similar.
Thanks in advance for your help!
What you can do is, after every data-coherence check, make a copy of the table into a hidden sheet (to freeze that state of the data).
The next time you run your code, you just compare your data against the hidden copy to see which data changed. Then you only need to check the coherence of the changed data.
Comparisons like this can be done quickly by reading both ranges (the data and the hidden copy) into arrays and comparing the arrays.
You can read a full range of data into an array with one single line of code:
Dim DataArray() As Variant
DataArray = ThisWorkbook.Worksheets("Data").Range("A1:C10").Value
DataArray is now an array containing the data of range A1:C10, and you can access it using:
DataArray(row, column)
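A minimal sketch of the full comparison step, under the assumption that the snapshot lives on a hidden sheet; the sheet names "Data" and "HiddenCopy" and the range A1:C10 are placeholders for your own:

```vba
' Compare the live data against the hidden snapshot and flag changed rows.
' "Data", "HiddenCopy" and the range are placeholder names - adjust to your workbook.
Sub CheckChangedRows()
    Dim live() As Variant, snap() As Variant
    Dim r As Long, c As Long

    live = ThisWorkbook.Worksheets("Data").Range("A1:C10").Value
    snap = ThisWorkbook.Worksheets("HiddenCopy").Range("A1:C10").Value

    For r = LBound(live, 1) To UBound(live, 1)
        For c = LBound(live, 2) To UBound(live, 2)
            If live(r, c) <> snap(r, c) Then
                Debug.Print "Row " & r & " changed"  ' re-check coherence of this row only
                Exit For                             ' one changed cell is enough to flag the row
            End If
        Next c
    Next r

    ' After the coherence check succeeds, refresh the snapshot:
    ThisWorkbook.Worksheets("HiddenCopy").Range("A1:C10").Value = live
End Sub
```

Since both reads and the write-back each hit the sheet only once, the loop itself runs entirely in memory and stays fast even for large tables.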
How can you solve this problem if the first column contains data that needs to be joined on common values in the second column? In that case some kind of condition would have to be specified.
There can be a huge amount of data, so copying/transferring the data will not work.
I would be grateful for any help.
I'm trying to update a worksheet using queries. The problem is that the imported data keeps getting turned into a table, although the source data is not a table.
Basically I want columns A:E in Worksheet(x) to be updated with columns A:E from Worksheet(xyz), keeping the formatting from Worksheet(xyz).
Power Query always returns its output as a table, since that makes it easier to address data positions via vectors. Assume you have two queries: it is much easier to merge/append them when they are treated as relational data tables.
You can always convert the output table back to a range, but the connection to the query will be lost.
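If you do decide to drop the table, the conversion can be done from VBA via `ListObject.Unlist`; a sketch, where the sheet and table names are assumptions:

```vba
' Convert a query output table back to a plain range.
' "Sheet1" and "Table_Query1" are placeholder names - check the actual
' table name in the Name Box or in Table Design.
Sub UnlistQueryTable()
    Dim lo As ListObject
    Set lo = ThisWorkbook.Worksheets("Sheet1").ListObjects("Table_Query1")
    lo.Unlist   ' keeps the values and formatting, but breaks the link to the query
End Sub
```

As noted above, after `Unlist` the range will no longer refresh from the query, so this only makes sense as a final step after the last refresh.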
Is it possible to somehow overwrite existing counter fields when loading data with COPY FROM (from CSV),
or to completely delete rows from the database?
When I COPY FROM data into existing rows, the counters are summed.
I can't completely DELETE these rows either:
although the rows appear to be deleted, when I re-run COPY FROM with the CSV data,
the counter fields continue to increase.
You can't set counters to a specific value: the only supported operations are increment and decrement. To set a counter to a specific value you either need to decrease it by its current value and then increase it to the desired value (but this requires reading the current value first), or delete the corresponding cells (or the whole row) and then perform an increment with the desired number.
The second approach is easier to implement, but requires that you first generate a file of CQL DELETE commands based on the content of your CSV file and then use COPY FROM; if nobody has incremented the values since the deletion, the counters will end up with the correct values.
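To make the delete-then-increment idea concrete, here is a CQL sketch; the table and column names are hypothetical, so substitute your own schema:

```sql
-- Hypothetical counter table:
--   CREATE TABLE stats (id text PRIMARY KEY, hits counter);

-- Step 1: delete the row so the old counter state is discarded.
DELETE FROM stats WHERE id = 'row1';

-- Step 2: re-apply the desired value as an increment
-- (an increment on a deleted counter is what COPY FROM effectively performs).
UPDATE stats SET hits = hits + 42 WHERE id = 'row1';
```

The DELETE commands for step 1 would be generated from the IDs in your CSV file, and step 2 is then handled by the COPY FROM run itself.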
I currently have a large data set in Excel (600,000 rows).
It lists the generation of individual plants across different states; what I want is the total generation per state. What is a quick way of aggregating the individual plant data into state data?
I have provided a very small example of what I have here:
The main challenge is that my table mapping plants to states (e.g. AA1 = QLD) is a separate document and not as simple as in my example; in reality there are over 50 plants for each state and no naming pattern to the plants.
The only thing I can think of that would solve this is writing an IF statement about 400 lines long, which of course is not feasible.
Any help would be appreciated.
Thanks
First, convert your data to an Excel table:
Then add a helper column that lines up the correct state with the correct plant in the table. A simple VLOOKUP will do this:
=VLOOKUP(A2,I:J,2,FALSE)
Then insert a pivot table using the data from the existing table and the new helper column. Put State in Rows and Generation in Values.
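If you would rather skip the pivot table, a SUMIF over the helper column gives the same per-state totals; the column letters here are assumptions (A = plant, B = generation, C = helper state from the VLOOKUP, E = a list of unique states starting in E2):

```
=SUMIF(C:C,E2,B:B)
```

Fill this down next to the state list and each cell returns the total generation for that state.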
I have code that pulls data from an online JSON API. The data can be stored in a variant array or in cells; I haven't decided yet, but probably cells to save memory. (I have 24 GB of RAM; should I go for arrays for faster speed?)
The JSON is currently under 7 MB. I don't know how big the parsed object will be (I'm using the VBA-JSON class). Each row of the data is keyed by a non-contiguous ID number. I currently have 20K-30K rows.
Anyway, I have a few other sheets that each want a portion of the parsed data, identified by ID. I know there's AutoFilter, but I wonder if there are faster options. Maybe get the row number by ID (MATCH?) and pull the other columns from that row?
As I said, other parts of the code can still change, such as storing the parsed data in arrays instead of cells.
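One common alternative to repeated MATCH or AutoFilter calls is to build a Scripting.Dictionary mapping ID to row index once, after which each lookup is a constant-time hash access rather than a scan. A sketch, assuming the parsed data sits on a sheet named "Parsed" with the IDs in column A (both names are placeholders):

```vba
' Build an ID -> row-index map once, then look rows up by ID without scanning.
' "Parsed", the range, and the sample ID "12345" are placeholder values.
Sub IndexById()
    Dim data() As Variant
    Dim idx As Object, r As Long

    data = ThisWorkbook.Worksheets("Parsed").Range("A1:E30000").Value
    Set idx = CreateObject("Scripting.Dictionary")

    For r = LBound(data, 1) To UBound(data, 1)
        idx(CStr(data(r, 1))) = r            ' key: ID, value: row index in the array
    Next r

    ' Later, pull all columns for one ID in constant time:
    Dim targetRow As Long
    If idx.Exists("12345") Then
        targetRow = idx("12345")
        Debug.Print data(targetRow, 2), data(targetRow, 3)
    End If
End Sub
```

Building the dictionary is a single pass over the 20K-30K rows, so it pays for itself as soon as more than one sheet needs to look rows up by ID.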