PowerApps Flow: Trigger condition not working. The flow keeps firing - SharePoint

I have tables with monthly data, and I need to calculate row-level summations. I also need a column-level summation for all columns.
My columns are:
Project ID | Name | Budget Year | Total Amount | Sum Total | Jan | Feb | ... | Dec
'Total Amount' is a calculated column, but this doesn't provide me with the column-level total, so I have a 'Sum Total' currency column that I populate using a flow that gets triggered every time something gets created or modified.
To ensure I don't run into an infinite loop, I have the below in the Trigger Condition:
@not(equals(triggerBody()?['Total_x0020_Amount'], triggerBody()?['SumTotal']))
Alternatively, I also tried:
@not(equals(triggerOutputs()?['body/Total_x0020_Amount'], triggerOutputs()?['body/SumTotal']))
Neither of these works, and my flow keeps firing. Could someone point me in the right direction?

Another way to approach this is to let the flow run, but check whether the totals already match the expected outcome BEFORE updating the record.
If they match, skip the record without updating it. If they don't match, update the record.
The next time the flow runs for that item, it will hit the first part of the logic and know that no update needs to occur.
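With this approach, the Condition inside the flow can compare the two fields directly. A sketch (the internal names `Total_x0020_Amount` and `SumTotal` are taken from the question; verify them in List Settings, since SharePoint encodes spaces and a calculated column's internal name can differ from its display name):

```
@equals(triggerOutputs()?['body/Total_x0020_Amount'], triggerOutputs()?['body/SumTotal'])
```

If this evaluates to true, skip the record; otherwise run the 'Update item' action. Note that expressions inside a flow (unlike trigger conditions) are entered without the leading `@` when typed in the expression editor.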

I managed to fix this by having a Condition block that performs the update only when any of the specific monthly columns has changed. The flow still gets triggered on every edit, but this condition cuts down the number of updates considerably and has been very reliable so far.


Add column that does not refresh in Power Query / Excel

I am trying to combine a Power Query-generated view with a column for user input during a review session.
I would like to prevent this column from changing upon data refresh.
A     | B
------+-------
Stats | Review
Long  | good
Short | bad
Column A can be updated by the refresh. Column B is unknown at query time and is therefore created with empty values; it is populated by the user. I want to update A without losing the values in B.
I tried adding column B in Excel instead of Power Query, after loading the query to an Excel table. It kind of works, as the values stay in the column; however, the row order gets messed up when I refresh.
Ultimately I think this might be possible with power query when generating the table.
Can this command be modified to get the desired behavior on the column?
Table.AddColumn(#"A", "B", each null)
Sorry for the bad example; it's kind of hard to show proper Excel/Power Query 'code'.
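One common pattern for this (a sketch, not a tested solution) is a self-referencing query: instead of `Table.AddColumn(#"A", "B", each null)`, load the current state of the output Excel table back into Power Query and merge the user-entered column onto the refreshed data by a key column. Assuming the loaded output table is named `Output`, the key column is `A`, and the manual column is `B`:

```
let
    Source = #"A",  // the refreshed query, without column B
    // Read the current state of the Excel output table, including user edits
    Existing = Excel.CurrentWorkbook(){[Name = "Output"]}[Content],
    // Re-attach the user-entered B values by matching on the key column
    Merged = Table.NestedJoin(Source, {"A"}, Existing, {"A"}, "Prev", JoinKind.LeftOuter),
    Result = Table.ExpandTableColumn(Merged, "Prev", {"B"})
in
    Result
```

Rows that are new since the last refresh simply get null in B. This also sidesteps the row-order problem, because each B value travels with its key rather than with its row position.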

Cassandra lastupdated (auto_now), lastaccessed and created (auto_now_add)

Is there a way we can auto update the columns creation and last updated/accessed timestamp?
We can use the toTimestamp(now()) function to store the creation time. But do we have a function like writetime(name), which is used to get the last-modified time? Is there a similar function for reading the creation and accessed times?
Is there a way I can get all the three timestamps lastupdated/lastaccessed and created timestamp auto-generated and stored?
Yes, there is a writetime function, but it only operates on non-primary key columns.
aploetz@cqlsh:stackoverflow> SELECT name, description, writetime(description)
                             FROM bookbyname WHERE name='Patriot Games';

 name          | description                                                                                     | writetime(description)
---------------+-------------------------------------------------------------------------------------------------+-------------------------
 Patriot Games | Jack Ryan saves England's next king, and becomes the target of an IRA splinter terrorism cell. | 1442340092257821
Cassandra does not keep track of last accessed/read time, or anything like that.
In Cassandra the last write wins, so for a column the last-updated and created times are going to be the same thing. But if you had one column that you know had changed and one that you know had not changed, you could get the write times of both, and then you'd have your updated and created times.
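For example (a sketch; `created_at` here is an assumed extra column that is written once on INSERT and never touched by subsequent UPDATEs):

```
SELECT name,
       writetime(created_at)  AS created,
       writetime(description) AS updated
FROM bookbyname WHERE name='Patriot Games';
```

Since `created_at` never changes after the initial insert, its write time effectively serves as the row's creation time, while the write time of a regularly updated column gives the last-updated time.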

Excel Query looking up multiple values for the same name and presenting averages

Apologies if this has been asked before. I would be surprised if it hasn't but I am just not hitting the correct syntax to search and get the answer.
I have a table of raw data for my staff; it contains the name of the employee who completed a job and the start and finish times, among other things. I have no unique IDs other than the name, and I can't change that, as I'm part of a large organisation and have to make do with the data I'm given.
What I would like to do is present a table (Table 2) that shows the name of the employee, takes the start/finish times for all of their jobs in Table 1, and presents the average time taken across all of their jobs.
I have used VLOOKUP in the past, but I'm not sure it will cut it here. The raw data table contains approx. 6,000 jobs each month.
On Table 1 I work out the time taken for each job with this formula:
=IF(V6>R6,V6-R6,24-R6+V6) (R = started time, V = completed time, in 24-hour clock)
I have gone this route as some jobs are started before midnight and completed afterwards. My raw data also contains the started/completed dates in separate columns, so I am open to an expert's feedback on whether there is a better way to work out the total time from start to completion.
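Since the raw data already has the start and completion dates in separate columns, a more robust variant (a sketch; the date columns Q and U are assumptions, so adjust to your actual layout) is to add the date and time together and subtract:

```
=(U6+V6)-(Q6+R6)
```

This handles jobs crossing midnight, and even jobs spanning multiple days, without the IF, provided the cells hold real Excel date and time values. Format the result cell as a duration such as [h]:mm.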
I believe the easiest way to tackle this would be with a pivot table. Calculate the time taken for each Name and Job combination in Table 1, then create a pivot table with the Name in the Row Labels and the Time in the Values, and change the Time values to show an average instead of a sum.
Alternatively, you could create a unique list of names, perhaps with Data > Remove Duplicates, and then use an =AVERAGEIF formula.
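For instance (a sketch; it assumes the raw data is an Excel table named Table1 with a Name column and a TimeTaken column holding the per-job duration, and that the de-duplicated employee name sits in A2 of the summary sheet):

```
=AVERAGEIF(Table1[Name], A2, Table1[TimeTaken])
```

Fill this down next to the unique name list, and format the result cells as times.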
Thanks, this gives me the thread to pull on. I have unique names, as it's the person's full name, but I'll try pivot tables to hopefully make it a little more future-proof for other things to be reported on later.

Performance tuning in Cognos Report Studio

Working in Cognos Report Studio 10.2.1, I have two query items. The first query item is the base table, which returns some millions of records. The second query item comes from a different table. I need to LEFT OUTER JOIN the first query item with the other. In a third query item, after the join, I filter on a date column in format YYYYMM to give me records falling under 201406, i.e. the current month and year. This is the common column in both tables, apart from AcctNo, which is used to join them.
The problem is that when I try to view Tabular Data, the report takes forever to run; after waiting patiently for 30 minutes, I just have to cancel it. When I add the same filter criteria to the 1st query item on the date column and then view the third query item, it gives me the output. But in the long run I have to join multiple tables with this base table, and in one of the tables the filter criteria needs to give output for two months. I am converting a SAS program to Cognos; in the SAS code there is no filter on the base table, and even then the join query takes a few seconds to run.
My question is: is there any way to improve the performance of the query so that it runs, and more importantly runs in less time? Please note: modelling my query in FM is not an option in this case.
I was able to get this resolved myself after much trial and error.
What I did was create a copy of the 1st query item, filter the 1st query item on the current month and year, and for the copy of the 1st query item add a filter for the two months. That way I was able to run my query and get the desired results.
Though this is a rare-case scenario, I hope it helps someone else.

Get the average monthly value from 2 SP List columns and display in new column

I need to calculate the average value for each month. Currently I have 2 columns: "DATE" (date value, e.g. 01/01/2010) and "AccOpen" (number value). So for all dates within January, I need to return the average of all numbers contained in the corresponding AccOpen rows for January dates.
Is it possible to use the CALCULATED option and input a FORMULA that will return the average for all items within each month's period (when adding a column to the list)?
DATE       | ACCOPEN | AVERAGE
01/01/2010 | 2       | 2
02/01/2010 | 2       |
03/01/2010 | 2       |
04/01/2010 | 2       |
01/02/2010 | 2       | 2
02/02/2010 | 2       |
03/02/2010 | 2       |
04/02/2010 | 2       |
You're not going to be able to do this OOTB without writing event receiver code (or other custom code running in a batch mode).
To get you started
MSDN - How to: Create an Event Handler Feature
Event Handlers : Everything you need to know...
This will need to hook into the list item update and then consolidate your list into a separate summary list with the calculations you need.
The brute force approach would be to run the calculation afresh for every item in the group when an item is inserted/updated.
A smarter approach would be to update just the delta (the difference between the old and the new record), which is easier to do if you store the components of the calculation, so in your case:
Month - NumRecords - TotalValue
and work out the Average on the fly (it's easy to apply a delta to NumRecords/TotalValue, but impossible to apply one directly to the average).
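The delta idea in plain terms: keep NumRecords and TotalValue per month and derive the average only when it is displayed. A minimal sketch in Python (the names are illustrative, not SharePoint API calls):

```python
# Per-month running components; the average is derived, never stored.
summary = {}  # month -> {"num": record count, "total": running sum}

def apply_delta(month, old_value, new_value):
    """Update one month's components for a single inserted/updated item.
    Pass old_value=None for a brand-new item."""
    entry = summary.setdefault(month, {"num": 0, "total": 0.0})
    if old_value is None:          # insert: one more record, add its value
        entry["num"] += 1
        entry["total"] += new_value
    else:                          # update: adjust by the difference only
        entry["total"] += new_value - old_value

def average(month):
    entry = summary[month]
    return entry["total"] / entry["num"]

# January: four items of 2 are inserted, then one of them is changed to 6
for v in (2, 2, 2, 2):
    apply_delta("2010-01", None, v)
apply_delta("2010-01", 2, 6)
print(average("2010-01"))  # 12 / 4 = 3.0
```

No item in the month other than the one being saved ever needs to be re-read, which is what makes this cheaper than the brute-force recalculation.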
One 3rd party web part which may fit your need is PivotPoint - it allows you to do things like sum/count/avg over groups like Month & Year (disclaimer - I work for the company)
It is not possible to query anything other than the current item when creating a formula field.
The only way to do this is to create custom code either within an event handler for the list or external code that processes items in the list and updates a "average" field when required.
Create a calculated field to give you the year and month, for example: 2011-07. Then modify your list view to group on the calculated field. When editing your view, there is also an option to display totals, I believe you can set this to average for your AccOpen column. If you're not interested in the details, you can choose to collapse all groups by default.
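The calculated column formula for that year-month value could look something like this (a sketch; it assumes your date column is named DATE, and uses Excel-style format codes, where lowercase mm means month):

```
=TEXT([DATE],"yyyy-mm")
```

This yields values like 2011-07 that group and sort correctly in the view.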
