I have 3 columns: Owner, General Manager and PR. For Owner the value can be either False or Owner, for General Manager it can be either False or General Manager, and so on. A fourth column, called Relationship Key, should combine the values from each column, excluding any False, e.g. False, General Manager and PR = General Manager, PR.
Try
=SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(CONCATENATE(A2,", ",B2,", ",C2),"FALSE, ",""),", FALSE",""),"FALSE","")
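The inner CONCATENATE joins the three cells with ", " separators (a logical FALSE renders as the text "FALSE"), and the three SUBSTITUTE calls then strip "FALSE" from each possible position: "FALSE, " covers the start and middle, ", FALSE" covers the end, and the bare "FALSE" covers the all-FALSE case. For example, False, General Manager and PR concatenates to "FALSE, General Manager, PR", and removing "FALSE, " leaves "General Manager, PR".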
I have a table with many columns. Three of these columns are:
Package Name (text)
Units Required (Int.64)
Assessment (Int.64)
What I am trying to do is find the 'minimum' "Package Name": first select the rows with the smallest "Units Required", then, because there are sometimes several rows where the number of required units is the same, take the row with the lowest "Assessment".
I am exploring the Table.Group() approach but I am not getting anywhere with my understanding of it. I am doing this in Power Query in Excel 365.
Pseudocode would be something like:
Table.Group("Previous Step Name",{"Package Name"},{MIN("Units Required"),MIN("Assessment")})
As an aside - is it possible to group at two levels with a single Table.Group, such as "Package Name" and "Column X", so that the result is nested: for each "Package Name", then for each "Column X" within that "Package Name"?
Thank you in advance for taking a look at this.
Any help greatly appreciated.
Cheers
The Frog
I think you have to do it step by step.
Data
Queries
Load_Data
Load data from Excel table
let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content]
in
    Source
Min_Unit
Identify the min unit by grouping with an empty "group by" field.
let
    Source = Load_Data,
    Group = Table.Group(Source, {}, {{"Min_Unit", each List.Min([Units Required]), type number}})
in
    Group
Min_Unit_And_Assessment
Use an inner join to filter the original data for entries that equal Min_Unit. Next, group by "Units Required" to get the min assessment.
let
    Source = Table.NestedJoin(Load_Data, {"Units Required"}, Min_Unit, {"Min_Unit"}, "Min_Unit", JoinKind.Inner),
    Group = Table.Group(Source, {"Units Required"}, {{"Min_Assessment", each List.Min([Assessment]), type nullable number}})
in
    Group
Result
Inner join to filter original data for the combination of min_unit and min_assessment.
let
    Source = Table.NestedJoin(Load_Data, {"Units Required", "Assessment"}, Min_Unit_And_Assessment, {"Units Required", "Min_Assessment"}, "Min_Unit_And_Assessment", JoinKind.Inner),
    RemoveUnnecessaryColumns = Table.RemoveColumns(Source, {"Min_Unit_And_Assessment"})
in
    RemoveUnnecessaryColumns
Qualia, thank you for pointing me in the right direction.
The way that I solved this was really simple in the end!
Step 1: Sort the rows based on the grouping criteria (package name, system class) in that order
Step 2: Add an Index Column so each row has a unique ID to work with
Step 3: Group the table based on the same fields (package name, system class) and 'aggregate' on the lowest Index Number (MIN)
Step 4: Perform a 'Merge Queries' with a Left Outer Join, using the Index Number as the matching field between your current step and the earlier step where the Index was added. Only the rows you need will be matched, since the others are gone thanks to the MIN aggregation from Step 3. Here is my example:
Table.NestedJoin(#"Grouped Rows", {"Winner"}, #"Added Index", {"Index"}, "Lookup Data", JoinKind.LeftOuter)
- Grouped Rows was the grouping step (Step 3)
- Winner is the name of the Index that had the minimum value
- Added Index was the last step before grouping that still had all the columns (Step 2)
- Index is the column that was added after the sort to uniquely number each row
Step 5: Expand the table and select the columns of data that you want to hang onto
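For anyone who wants those five steps in a single query, here is a minimal sketch. It assumes a prior step named Source, the grouping columns "Package Name" and "System Class" from the description above, and that the sort also orders "Units Required" and "Assessment" ascending so the lowest Index in each group is the winning row:

let
    // Step 1: sort by the grouping criteria, then by the tie-break columns
    Sorted = Table.Sort(Source, {{"Package Name", Order.Ascending}, {"System Class", Order.Ascending}, {"Units Required", Order.Ascending}, {"Assessment", Order.Ascending}}),
    // Step 2: add a unique ID for each row
    #"Added Index" = Table.AddIndexColumn(Sorted, "Index", 0, 1),
    // Step 3: the minimum Index per group identifies the winning row
    #"Grouped Rows" = Table.Group(#"Added Index", {"Package Name", "System Class"}, {{"Winner", each List.Min([Index]), Int64.Type}}),
    // Step 4: left outer join back on Winner = Index to recover the full rows
    Merged = Table.NestedJoin(#"Grouped Rows", {"Winner"}, #"Added Index", {"Index"}, "Lookup Data", JoinKind.LeftOuter),
    // Step 5: expand only the columns you want to keep
    Expanded = Table.ExpandTableColumn(Merged, "Lookup Data", {"Units Required", "Assessment"})
in
    Expanded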
Treating it a bit like a database was a good approach and I appreciate the suggestion you put together for me. Hopefully this will allow others to solve some of their problems too.
Cheers and many thanks
The Frog
We use UOM conversions at this client. We stock in Eaches and sell in Cases. The problem we are having with the pick ticket is that both the quantity to be picked and the UOM printed are in the stocking unit, not the selling unit.
e.g. The customer orders 73 cases (12 ea per case). The pick ticket prints 876 each. This requires the warehouse person to look up each item determine if there is a Selling UOM and ratio and to then manually convert 876 eaches to 73 cases.
Obviously, the pick ticket should print 73 cases, but I cannot find a way to do this. The items are lotted, and an order of 73 cases might have 50 cases of Lot A and 23 cases of Lot B. This is represented in the SOShipLineSplit table. The quantities and UOM in this table are based on stocking units.
Ideally, I could join the INUnit table to both the SOShipLine and SOShipLineSplit tables. See below.
SELECT CASE WHEN ISNULL(U.UnitRate, 0) = 0 THEN S.Qty ELSE S.Qty / U.UnitRate END AS ShipQty
      ,CASE WHEN ISNULL(U.UnitRate, 0) = 0 THEN S.UOM ELSE U.FromUnit END AS UOM
FROM SOShipLineSplit S
INNER JOIN SOShipLine SL
    ON S.CompanyID = SL.CompanyID AND S.ShipmentNbr = SL.ShipmentNbr AND S.LineNbr = SL.LineNbr AND S.InventoryID = SL.InventoryID
LEFT OUTER JOIN INUnit U
    ON S.CompanyID = U.CompanyID AND S.InventoryID = U.InventoryID AND S.UOM = U.ToUnit AND SL.UOM = U.FromUnit
WHERE S.ShipmentNbr = '000161' AND S.CompanyID = 4
The problem is that the Acumatica report writer does not support a join whose conditions reference multiple tables:
LEFT OUTER JOIN INUnit U
    ON S.CompanyID = U.CompanyID AND S.InventoryID = U.InventoryID AND S.UOM = U.ToUnit AND SL.UOM = U.FromUnit
I believe I must be missing something. We cannot be the only Acumatica client who uses selling units of measure. Is there another table I could use that already contains the quantities and UOM for this order converted to selling units?
Or another solution?
Thanks in advance.
pat
EDIT:
If the goal is to display accurate quantities before/after conversion, then the INUnit DAC can't be used. It doesn't store historical data: INUnit values can be changed after an order has been finalized, so re-using them to compute quantities will not yield accurate results.
For that scenario you would need to use the historical data fields with Base prefixes, like ShippedQuantity/BaseShippedQuantity. If you need to store more historical data, add a custom field to hold these values and update it when the shipment is created/modified.
The main issue appears to be a logical error in the requirement:
The problem is that the INUnit table has to be joined to BOTH the
SOShipLine and the SOShipLineSplit tables.
The INUnit DAC has a single parent, not two, so you need to change your requirement to reflect that constraint.
If SOShipLine and SOShipLineSplit values differ then you'll never get any record.
If they are identical then there's no need to join on both since they have the same value.
I suggest adding two joins, one for SOShipLine and another for SOShipLineSplit. In the report you can then choose which one to display (the first, the second, or both).
You can also add visibility conditions or an IIF formula condition in the report if you want to handle null values for display purposes.
Use the Child Alias property in the schema builder to join the same table twice without name conflicts. Use the Child Alias table name in the report formulas too (whether displaying a field or in formula conditions).
I have a compound primary key (uid, stale). When I try to edit the stale boolean value, the row gets duplicated (as the compound key can support both combinations).
E.g. starting with
1) uid-val, TRUE
updating the stale column to FALSE results in 2 rows of data, one with the older TRUE and one with the new FALSE:
1) uid-val, TRUE
2) uid-val, FALSE
Is there any way to overcome this other than a delete before inserting the updated values?
No - changing any component of the primary key will lead to adding a new row...
Why not convert that column from a partition key/clustering column into a "normal" column? I think you need to adjust your data model.
I have a table:
Old table
Level 1 is top level. CC is the smallest unit inside our company.
What I want to do is convert this table into a flat table with additional columns, Level 1 / Level 2 / Level 3, which show the parent department of each node,
e.g. 100111 |CC |3 |IS// |IS/ |IS.
New Table
Using Excel I can do it easily, using a conditional formula that copies the cell above if the current cell is a CC.
My process is like this: SAP application (export) -> .xls file (without Level and Parent columns) -> create new columns for level and parent node with Power Query -> make new columns (Level 1 - 6) like the example in the new table.
For the Level 1 column I use this formula:
=IF(B2=1,A2,D1)
and I fill it down for the rest. In my data, the first row is always level 1.
For Level 2 (because it is level 2):
=IF(B2=2,A2,IF(B2<2,"",E1))
And I repeat the same formula for the other columns.
Can someone suggest me a solution for this problem?
I think the Power Query equivalent of your first formula would be to Add a Column with this formula:
if [Level] = 1 then [Department] else null
I would follow that with a "Fill / Down" step (from the Transform ribbon).
The subsequent formulas would look similar, e.g. for Level 2
if [Level] = 2 then [Department] else null
Follow each with a "Fill / Down" step and you should be done.
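As a rough end-to-end sketch, assuming a source table named "Departments" (hypothetical) and the [Level] and [Department] column names used above. For Level 2 onwards, writing an empty string on the rows above that level mirrors the "" in the original Excel formula, so Fill Down cannot carry a value across a new Level 1 block:

let
    Source = Excel.CurrentWorkbook(){[Name = "Departments"]}[Content], // hypothetical table name
    // Level 1: copy Department on level-1 rows, null elsewhere, then fill down
    AddLevel1 = Table.AddColumn(Source, "Level 1", each if [Level] = 1 then [Department] else null),
    FillLevel1 = Table.FillDown(AddLevel1, {"Level 1"}),
    // Level 2: "" on rows above level 2 blocks the fill at each new Level 1 block
    AddLevel2 = Table.AddColumn(FillLevel1, "Level 2", each if [Level] = 2 then [Department] else if [Level] < 2 then "" else null),
    FillLevel2 = Table.FillDown(AddLevel2, {"Level 2"})
in
    FillLevel2

Repeat the same pair of steps for Levels 3 through 6.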
I have data from two tables:
Customers (containing the customer ID and the total value of orders/funding)
Orders (containing the customer ID and each order)
I created a Power Query, then chose the option to "Merge Queries as New", selected the matching columns (Customer ID), and chose the Left Outer option (all from the first, matching from the second => all from the Customers table, matching from the Orders table). Then I expanded the last column of the query to include what I wanted from the Orders table, resulting in the table below on the left. The one on the right is what I'm after. The problem is that the funding amounts are already totals per customer: I don't need the value of each order broken down. I still need the orders displayed, but I don't need their values (just the total per customer). Is it possible to do it like the one below on the right? Otherwise, the grand total is way off.
I think what you're trying to do is join with only the first instance of each value in your Customer column. There doesn't appear to be any feature or GUI element that allows you to do that (I had a look at the reference documentation for Power Query M, maybe I missed something).
To replicate your data, I'm starting off with some tables (left table is named Customers, right table is named Orders):
I then use the M code below (the first few lines are just to get my tables from the sheet):
let
    customers = Excel.CurrentWorkbook(){[Name = "Customers"]}[Content],
    orders = Excel.CurrentWorkbook(){[Name = "Orders"]}[Content],
    merged = Table.NestedJoin(orders, {"CUSTOMER"}, customers, {"CUSTOMER"}, "merged", JoinKind.LeftOuter),
    indexColumn = Table.AddIndexColumn(merged, "Temporary", 0, 1),
    indexes =
        let
            uniqueCustomers = Table.Distinct(Table.SelectColumns(indexColumn, {"CUSTOMER"})), // Want to keep as table
            listOfRecords = Table.ToRecords(uniqueCustomers),
            firstOccurrenceIndexes = List.Accumulate(listOfRecords, {}, (listState, currentItem) =>
                List.Combine({listState, {Table.PositionOf(indexColumn, currentItem, Occurrence.First, "CUSTOMER")}})
            )
        in
            firstOccurrenceIndexes,
    expandSelectively =
        let
            toBoolean = Table.TransformColumns(indexColumn, {{"Temporary", each List.Contains(indexes, _), type logical}}),
            tableOrNull = Table.AddColumn(toBoolean, "toExpand", each if [Temporary] then [merged] else null),
            dropRedundantColumns = Table.RemoveColumns(tableOrNull, {"merged", "Temporary"}),
            expand = Table.ExpandTableColumn(dropRedundantColumns, "toExpand", {"FUNDING"})
        in
            expand
in
    expandSelectively
If your table names and column names match mine (including case sensitivity), then you might just be able to copy-paste all of the M code above into the Advanced Editor and have it work for you. Otherwise, you may need to tweak as necessary.
This is what I get when I load the query to the worksheet.
There might be better (more efficient) ways of doing this, but this is what I have for now.
If you're not using the order ID column, then I would suggest doing a Group By on the Orders table before merging in the funding, so that you'd end up with a table like this instead:
Region  Customer  OrderCount  Funding
South   A         3           2394
South   B         2           4323
South   C         1           1234
South   D         2           3423
This way you don't have mixed levels of granularity that cause problems like you are seeing with the totals.
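Here is a minimal sketch of that approach, reusing the Orders and Customers table names and the CUSTOMER/FUNDING columns from the answer above; the Region column is an assumption based on the sample table:

let
    orders = Excel.CurrentWorkbook(){[Name = "Orders"]}[Content],
    customers = Excel.CurrentWorkbook(){[Name = "Customers"]}[Content],
    // Collapse the orders to one row per customer before bringing in the funding
    grouped = Table.Group(orders, {"Region", "CUSTOMER"}, {{"OrderCount", each Table.RowCount(_), Int64.Type}}),
    // Now the merge is one-to-one, so the funding total appears exactly once per customer
    merged = Table.NestedJoin(grouped, {"CUSTOMER"}, customers, {"CUSTOMER"}, "merged", JoinKind.LeftOuter),
    expanded = Table.ExpandTableColumn(merged, "merged", {"FUNDING"})
in
    expanded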