I'm working in MS Access 2010 and expecting to receive thousands of change records in Excel format that I need to import into a personnel database. I've been tasked with "automating" the update process but could really use some help.
The primary table has 12 fields that could each change for each change form submitted. We have designed a macro to upload the Excel files, but some of the fields on the change form will be blank, resulting in incomplete employee records (e.g. the original employee record has all 12 fields filled in, but the change record only has 1).
Is it possible to write a query or macro to fill in the most recent employee record's empty or NULL values with the non-NULL values from the previous entries?
If I understand correctly, you want to retain the value in the 'primary' table if the value in the 'change' table is null. In that case, the following should work:
UPDATE <primaryTable>
INNER JOIN <changeTable>
ON <primaryTable>.<keyField> = <changeTable>.<keyField>
SET <primaryTable>.<Field1> = Nz(<changeTable>.<Field1>, <primaryTable>.<Field1>),
    <repeat for each field to update>
Just be sure you are dealing with nulls and not empty strings, which are common in Excel imports. If you have empty strings, you need to either convert them to nulls or use an IIf expression instead of the Nz function.
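For example, here is a minimal VBA sketch of the empty-string case, assuming hypothetical table names tblEmployees and tblChanges keyed on EmpID, with one illustrative field LastName (repeat the SET clause for each of the 12 fields):

Sub ApplyChanges()
    ' Sketch only: treats both Null and "" in the change table as "no change".
    ' tblEmployees, tblChanges, EmpID, and LastName are hypothetical names.
    Dim sql As String
    sql = "UPDATE tblEmployees INNER JOIN tblChanges " & _
          "ON tblEmployees.EmpID = tblChanges.EmpID " & _
          "SET tblEmployees.LastName = " & _
          "IIf(Nz(tblChanges.LastName, '') = '', tblEmployees.LastName, tblChanges.LastName)"
    CurrentDb.Execute sql, dbFailOnError
End Sub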
Please see the DAX below for a calculated table in Power BI:
TABLE1 =
VAR ParticipantOneParticipantId =
    SELECTEDVALUE ( ParticipantOneDetails[ParticipantId] )
RETURN
    FILTER (
        ParticipantOneMeetings,
        ParticipantOneMeetings[ParticipantId] = ParticipantOneParticipantId
    )
I am fetching a value for ParticipantId from a sliced table called ParticipantOneDetails and setting ParticipantOneParticipantId to it.
In the next step I am trying to filter the table ParticipantOneMeetings based on its column ParticipantId comparing it against ParticipantOneParticipantId.
The problem is that the resulting table is coming out empty even though I know that ParticipantOneParticipantId must have a value and the ParticipantOneMeetings table also has values. I verified by comparing against a hard-coded string.
Can you please point out what I am doing wrong? Is comparing this way not legal?
The problem lies in the approach you are trying. Calculated/custom tables and columns are static: they are only recomputed when the dataset is refreshed, and they do not interact dynamically with slicer selections. So it is impossible for a calculated table to pick up a slicer value dynamically.
Beyond that, your requirement of creating a new table based on the slicer value is not completely clear to me, because what you are attempting is simply the filtered output of ParticipantOneMeetings after the slicer is applied. If the two tables are related on the ParticipantId column, changing the slicer will automatically filter the ParticipantOneMeetings table. Understanding why you want to hold these same filtered values in a separate calculated table is the real question in finding an appropriate solution for you.
Turns out I needed to add the following measure to the table output:
MeetingsAttendedByBothParticipants =
COUNTROWS (
    INTERSECT (
        VALUES ( ParticipantOneMeetings[Name] ),
        VALUES ( ParticipantTwoMeetings[Name] )
    )
)
The above intersects the outputs of the two sliced meeting tables and counts the rows, giving the number of meetings that both participants attended.
What is the default behavior when adding a date, time, or datetime field to an Excel pivot table row/column? I have seen it sometimes added as the raw value, sometimes as Year > Quarter > Value, and other times perhaps somewhere in between.
When does Excel add it without aggregating it, and when does Excel aggregate it? Does it have to do with value cardinality, date range, or something else?
First, every entry in the column has to be a date/time or you won't be able to group them at all; in that case, obviously, the default is ungrouped.
Assuming everything is groupable, the default is no grouping. Each date will show individually.
The exception is if a pivot cache already exists. In that case it will group based on what the pivot cache says - the last way that field was grouped. This happens when you have more than one pivot table on the same data. The first pivot table creates the cache and all subsequent pivot tables use that existing cache.
In a new workbook (2010), I add a date field to the Row Labels and they are initially ungrouped by default.
I group them by month.
Now I go back to the original data and make a new pivot table. I add the date field to the Column Labels.
Because it uses the same cache, it automatically has them grouped the same way. Finally, I go back to the source data and replace one of the dates with a string. If I create another pivot table, it will look like the others, but when I refresh, it ungroups them because there is now a non-date in the column.
And if I try to group now, it says "Cannot group that selection".
That's why it works the way it does - a shared pivot cache. There are ways you can give each pivot table its own cache, but that uses more memory. However, if you want to group the same data differently, that's what you have to do.
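If you do need independent grouping, here is a minimal VBA sketch of forcing a separate cache for a new pivot table (the sheet names, source range, and table name are hypothetical):

Sub MakeIndependentPivot()
    ' Creating a new PivotCache, instead of reusing the workbook's existing
    ' one, lets this pivot table be grouped independently of the others.
    Dim pc As PivotCache
    Set pc = ThisWorkbook.PivotCaches.Create( _
        SourceType:=xlDatabase, _
        SourceData:="Sheet1!A1:B100")   ' hypothetical source range
    pc.CreatePivotTable _
        TableDestination:=Worksheets("Sheet2").Range("A3"), _
        TableName:="PivotIndependent"
End Sub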
I'm looking at a table (Table1) inside an Excel workbook saved on my OneDrive for Business account. I then want to get the maximum value in the CREATEDDATE column from this table.
I want to avoid pulling down the whole table with the API, so I'm trying to filter the results of my query to only the CREATEDDATE column. However, the column results are not being filtered to the one column, and I'm not getting an error to help me troubleshoot why. All I get is an HTTP 200 response and the full, unfiltered table results.
Is it possible to filter the columns retrieved from the API by the column name? The documentation made me think so.
I've confirmed that /columns?$select=name works correctly and returns just the name field, so I know that it recognizes this as an entity. $filter and $orderby do nothing when referencing any of the entities from the response (name, id, index, values). I know that I can limit columns by position, but I'd rather explicitly reference the column by name in case the order changes.
I'm using this query:
/v1.0/me/drive/items/{ID}/workbook/tables/Table1/columns?$filter=name eq 'CREATEDDATE'
You don't need $filter here; just pull the column by name directly. The prototypes from the Get TableColumn documentation are:
GET /workbook/tables/{id|name}/columns/{id|name}
GET /workbook/worksheets/{id|name}/tables/{id|name}/columns/{id|name}
So in your case, you should be able to simply call:
/v1.0/me/drive/items/{ID}/workbook/tables/Table1/columns/CREATEDDATE
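If it helps, here is a minimal VBA sketch of calling that endpoint; the bearer token and {ID} are placeholders you would supply, and the maximum CREATEDDATE is then found client-side from the "values" array in the JSON response:

Sub GetCreatedDateColumn()
    ' Fetch only the CREATEDDATE column from the workbook table.
    Const ACCESS_TOKEN As String = "<your-oauth-token>"   ' placeholder
    Dim http As Object
    Set http = CreateObject("MSXML2.XMLHTTP.6.0")
    http.Open "GET", "https://graph.microsoft.com/v1.0/me/drive/items/{ID}" & _
        "/workbook/tables/Table1/columns/CREATEDDATE", False
    http.setRequestHeader "Authorization", "Bearer " & ACCESS_TOKEN
    http.Send
    ' The response JSON contains a "values" array holding the column data.
    Debug.Print http.Status, http.responseText
End Sub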
I have a VBA script that generates a query string for a SAP HANA ODBC Connection in Excel. The query is determined by user inputs and can vary greatly in length. The query itself uses many versions of a similar query appended to one another using UNION ALL syntax.
The script sometimes throws a runtime error when trying to refresh. From my research, it has become clear that the reason for this is that the CommandText string exceeds the maximum allowed length of 32,767 characters (https://ask.sqlservercentral.com/questions/50819/too-long-sql-in-excel-vba.html).
I wondered whether there is a workaround for this, other than using a stored procedure. (I am not against that if there is a way to create a stored procedure at runtime and then execute it, but I cannot use a predefined stored procedure, as my query is different every time; hence the need for VBA to build it.)
Some more info about the dynamic query in VBA:
Column names, as well as parameters, are created dynamically and can be different every time.
The query uses groups of lists of product numbers to generate an IN statement for each product group, then sums the sales for those products under the name of the group. These are then all UNION'd together to create one table of grouped records.
Example of resulting query:
WITH SOME_CTE (SOME_FIELDS) AS
(SELECT SOME_STUFF
FROM SOME_TABLE
WHERE SOME_STUFF_IS_GOING_ON)
SELECT GEND "Gender", 'Attribute 1' "Attribute", SUM(UNITS) "Units", SUM(VAL) "Value", SUM(MARGIN) "Margin"
FROM SOME_CTE
WHERE PRODUCT IN ('12345', '23456', '34567', '45678')
GROUP BY GEND
UNION ALL
SELECT GEND, 'Attribute 2' ATTR_NAME, SUM(UNITS), SUM(VAL), SUM(MARGIN)
FROM SOME_CTE
WHERE PRODUCT IN ('01234', '02345', '03456', '03567')
GROUP BY GEND
ORDER BY "Gender", "Attribute"
...and so on.
As you can see, with 2 attribute groups containing 4 products each there is no problem, but when we get to about 30 groups with several hundred products each, the query can become too long.
Note: I have tried things like shortening field references in the repeated parts of the query string to one character, which helps but does not solve the problem.
Any help would be greatly appreciated.
One workaround is to send multiple queries. Since you are using UNION ALL, you could execute each single SELECT statement separately, i.e.:
Create a table in (for example) the master database (don't create temporary tables, as they will be dropped after every query), but before that, make sure to delete the old one if it exists (and also drop the table once you are done with it). Then change every single SELECT statement into an INSERT statement that inserts records into this so-called temporary table.
This way you'll avoid lengthy queries; you'll just send a series of single INSERT INTO ... SELECT statements.
At the end, to get all the results, you just need a simple SELECT query. After fetching this data, you should drop the table, as it's no longer needed.
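In VBA terms, a minimal sketch of this approach might look like the following (the DSN, staging table, and per-group SELECTs are hypothetical; in practice you would build the SELECT strings from the user input as you already do):

Sub RunChunkedQuery()
    ' Split the UNION ALL query into one INSERT per attribute group so that
    ' no single statement approaches the 32,767-character CommandText limit.
    Dim conn As Object, groupSelects As Variant, i As Long
    Set conn = CreateObject("ADODB.Connection")
    conn.Open "DSN=MY_HANA_DSN"                     ' hypothetical ODBC DSN

    On Error Resume Next
    conn.Execute "DROP TABLE STAGING_RESULTS"       ' clear any previous run
    On Error GoTo 0
    conn.Execute "CREATE TABLE STAGING_RESULTS (GENDER VARCHAR(10), " & _
                 "ATTR VARCHAR(50), UNITS INT, VAL DECIMAL(15,2), MARGIN DECIMAL(15,2))"

    ' One short SELECT per attribute group, built dynamically in practice.
    groupSelects = Array( _
        "SELECT GEND, 'Attribute 1', SUM(UNITS), SUM(VAL), SUM(MARGIN) " & _
        "FROM SOME_TABLE WHERE PRODUCT IN ('12345','23456') GROUP BY GEND", _
        "SELECT GEND, 'Attribute 2', SUM(UNITS), SUM(VAL), SUM(MARGIN) " & _
        "FROM SOME_TABLE WHERE PRODUCT IN ('01234','02345') GROUP BY GEND")

    For i = LBound(groupSelects) To UBound(groupSelects)
        conn.Execute "INSERT INTO STAGING_RESULTS " & groupSelects(i)
    Next i

    ' The query the workbook connection finally refreshes against is short:
    ' SELECT * FROM STAGING_RESULTS ORDER BY GENDER, ATTR
    conn.Close
End Sub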
I have a shapefile in Spotfire, and in its table view I have a column displaying DenseRank. For example, if I limit data by expression from the full 100 rows in the table to just 30, the DenseRank does not change. How can I get the rank to recalculate over the limited rows?
Thanks,
Chris
A table visualization does not allow dynamic calculations unless you have a document property in the expression; a calculated column expression re-executes whenever a document property value changes (or calculations are refreshed). For your scenario, instead of using a filter, I would create a property control with fixed values (10, 20, 30 ... 100) or with values from a column (the one you are using to limit the data), and then reference the document property linked to that property control in your calculated column expression.
I found a workaround to dynamically rank data based on filtering or marking. If you create a data function as simple as "tableout <- tablein", you can pass the original filtered and/or marked table to a new table. From there, insert a calculated column on the new table and it will recalculate each time.