I have a single-column table of customer account numbers and a main table of 400,000 records pulled from an Access database. I want to remove every record from the main table whose customer account number appears in the single-column table.
The Merge Queries capability in Power Query lets me return only the records that match the customer list (along with a variety of other variations on this theme), but I would like to know whether there is a way to invert this so that I return all records where the customer number does not appear in that list.
I have already achieved this by using the List.Contains function to add a custom column that identifies the rows to exclude and then filtering them out, but I think this is severely impacting the performance of my workbook. Refreshing the table, which has 400,000 rows before this series of transformations, takes a very long time, and every query that depends on this table is then also slow to refresh.
Thank you
If you do a Left Anti Join of your table against the single-column table, you will get your table filtered to only the rows that do not match the single-column list.
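A minimal M sketch of that merge, assuming the main query is named MainTable, the exclusion query is named CustomerList, and the key column in both is Customer Account Number (all placeholder names):

let
    // Left Anti join keeps only the MainTable rows with no match in CustomerList
    Joined = Table.NestedJoin(
        MainTable, {"Customer Account Number"},
        CustomerList, {"Customer Account Number"},
        "Matches", JoinKind.LeftAnti
    ),
    // drop the nested table column that the merge adds
    Result = Table.RemoveColumns(Joined, {"Matches"})
in
    Result

In the Merge dialog this corresponds to choosing "Left Anti (rows only in first)" as the join kind, which avoids the row-by-row List.Contains scan.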
I have a Databricks Delta table of financial transactions that is essentially a running log of all changes that ever took place on each record. Each record is uniquely identified by 3 keys, so given that uniqueness, each record can have multiple instances in this table, each representing a historical entry of a change (across one or more columns of that record). Now, if I wanted to find out cases where a specific column's value changed, I can easily achieve that by doing something like this:
SELECT t1.Key1, t1.Key2, t1.Key3, t1.Col12 AS "Before", t2.Col12 AS "After"
FROM table1 t1
INNER JOIN table1 t2
  ON t1.Key1 = t2.Key1 AND t1.Key2 = t2.Key2 AND t1.Key3 = t2.Key3
WHERE t1.Col12 != t2.Col12
However, these tables have a large number of columns. What I'm trying to achieve is a way to identify, in a self-join like this, any columns that changed: essentially a list of all column names that changed across all records. I don't care about the actual values that changed, just the column names, and it doesn't even have to be per row. The 3 keys will always be excluded, since they uniquely define a record.
Essentially I'm trying to find any columns that are susceptible to change, so that I can focus on them specifically for another purpose.
Any suggestions would be really appreciated.
Databricks has change data feed (CDF / CDC) functionality that can simplify this type of use case: https://docs.databricks.com/delta/delta-change-data-feed.html
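A minimal SQL sketch of how that could look, assuming the table is named table1 and that the feed is enabled before the changes you care about are written (table name and starting version are placeholders):

-- turn on the change data feed; only commits made after this point are captured
ALTER TABLE table1 SET TBLPROPERTIES (delta.enableChangeDataFeed = true);

-- read the feed from version 1 onward; each update yields an update_preimage
-- and an update_postimage row (see the _change_type column), which you can
-- pair up per key and compare column by column to find what changed
SELECT * FROM table_changes('table1', 1);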
I have two tables:
Table 1: one row per order, with the status (Online / Offline)
Table 2: multiple rows per order
Now I would like to reduce the number of records/rows in the second table based on the status (Offline) from Table 1.
Is there any alternative to a right join? The first table is filtered on status 'Offline'.
We are talking about several million rows, which takes some time to join.
Any thoughts on this from your side?
Some thoughts:
Create a relationship between these two tables and filter to "Offline".
You could create a join (Merge Queries) in Power Query and select only the On/Off status column to append. The import then needs more time, but you get a flat dataset in Power BI.
Create a new column in Power BI with DAX and use LOOKUPVALUE (sketched below).
Without seeing the data, I think I would try the first option. If it's too slow, then I think the only way is the second one, even if the import takes some more time.
The third one might be the slowest.
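For the third option, a minimal DAX sketch of the LOOKUPVALUE column, assuming the status table is named Orders, the detail table is named OrderDetails, and both carry an OrderID key (all placeholder names):

Status =
LOOKUPVALUE (
    Orders[Status],          -- column whose value to return
    Orders[OrderID],         -- column to search in
    OrderDetails[OrderID]    -- search value: the key of the current detail row
)

You could then filter OrderDetails on the new Status column to keep only "Offline" rows.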
I am working with Excel 2010, Power Query, and PowerPivot.
I have a query named Database that consists of 60+ merged tables containing a total of 2 million+ rows. I also have a separate query that consists of two columns, PrimaryKey3 and Members (a count of members per month). The entries in PrimaryKey3 are unique, in the form ID-MMM-YY.
Both queries have PrimaryKey3 in common; however, in Database there can be multiple rows with the same PrimaryKey3.
To match a member amount to each row in Database, I tried a Left Outer join. There were no errors, but when I try to load to PowerPivot it says there are only 169K rows. I then tried a Full Outer join and an Inner join, and received the error "could not convert value to number" on a column already formatted as text in Database. This column contains plain numbers and numbers preceded by a letter: 1234, A234. Every non-blank row has a PrimaryKey3. Why is it trying to reformat my columns, and how do I get around that?
Should I be using a different type of join, or is there another way besides merging to do this?
Hope this makes sense, thank you for any help in advance!
I uploaded both queries to PowerPivot and created a relationship through PrimaryKey3. I then created a new column in Database with =RELATED(Enrollment[Members]).
Due to performance issues I need to remove a few distinct counts from my DAX. However, I have a particular scenario and I can't figure out how to do it.
As an example, let's say one or more restaurants can be hired at one or more feasts and prepare one or more menus (see data below).
I want a PowerPivot table that shows in how many feasts each restaurant was present (see table below). I achieved this by using DISTINCTCOUNT.
Why not precalculate this in Power Query? The real data I have is a bit more complex (more ID columns), and in order to be able to pivot the data I would have to calculate thousands of possible combinations.
I tried adding a Feast dimension table to my model (in the example this would only be 1 column of 2 rows). I was hoping to use that relationship to be able to make a straight count, but I haven't been able to come up with the right DAX to do so.
You could use COUNTROWS() combined with VALUES().
Specifically, COUNTROWS() gives you the count of rows in a table, which means COUNTROWS expects a table as input. Here's the magic part: VALUES() returns a table as its result, and that table contains the distinct values of the table/column you provide as the argument to VALUES().
I'm not sure if I'm explaining it well, so for the sample data you provided, the measure would look like this (assuming the table is named Table1):
Unique Feasts:=COUNTROWS(VALUES('Table1'[Feast Id]))
You can then create a pivot table from PowerPivot, drag Restaurant Id into Rows, and drag the measure above into Values. Same result as DISTINCTCOUNT, but with less performance overhead (I think).
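For reference, the direct DISTINCTCOUNT measure it replaces (same assumed table and column names) would be:

Distinct Feasts:=DISTINCTCOUNT('Table1'[Feast Id])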
I have a Pivot Table structure as follows:
ROWS:
+-State
+---Customer
+-----Brand
Columns:
+-Cost
I would like to have another column that contains the number of customers in each state. The issue is that my data contains every order the customers have placed, so when I try to get a count of customers, it returns every instance of that customer in the column. Another issue is that my data is 40,000 rows, so I want to avoid having to edit the raw data.
I can easily do this with brute force, but I was wondering if there is any way to do this with standard pivot tables and no add-ons. The pivot table already does a nice job of consolidating the unique values for customers; now I just need a count of those unique values.