I am creating an automated email that sends every day on a schedule. Within this, 4 reports are created using individual SQL queries that then generate CSV files containing, essentially, lists of names - all of this works fine. Some of the generated CSVs are empty, so I have used parallel branches to set a variable to true or false depending on whether each CSV is created.
So at the end of this I have 3 of the 4 files (the number varies) and a variable for each stating whether or not it has been created.
What I want to know is: how do I attach a file only if its variable is true? If a null/not-created CSV is attached, the send action errors.
Any additional information required just ask.
Thanks
When you use parallel branches, each branch executes regardless of whether the variable is true or false, or whether the file was created. In this case, you can use a Condition action instead of parallel branches, so that the email (or the attach step) runs only when the variable is true.
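An alternative to one Condition per file is to build the attachment list dynamically: initialize an array variable, append an attachment object only when the corresponding "created" variable is true, and pass the array to the Attachments field of the send-mail action, which accepts an array. A sketch of the pattern (the variable, action, and file names are placeholders, not taken from the original flow):

```
Append to array variable "varAttachments" (inside a Condition where Report1Created is true):
{
  "Name": "report1.csv",
  "ContentBytes": "@{base64(body('Create_CSV_table_1'))}"
}

Send an email (V2) → Attachments: @{variables('varAttachments')}
```

This way the email is always sent, with only the files that actually exist attached.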
I am trying to resolve this error in Power Automate:
Actions in this flow may result in an infinite trigger loop.
Please ensure you add appropriate conditional checks to prevent this flow from triggering itself.
This seems to be a common problem when using this trigger:
When an item is created or modified
where the associated flow contains this action:
Update item
The dynamic is succinctly explained in these videos here and here.
Desired Behaviour
The desired behaviour is that the flow:
Runs when a list item is created or modified by a user (and updates the item accordingly)
But not when the flow itself updates the item
Actual Behaviour
The actual behaviour is that the flow:
Runs when a list item is created or modified by a user (and updates the item accordingly)
Also runs when the flow itself updates the item (causing an infinite loop)
What I've Tried
Some posts suggest using a Service Account to run the flow and then apply the logic:
If the flow was triggered by a Service Account, terminate the flow
But I do not have access to a service account in this scenario.
The simplest solution seems to be answers like this one and this one.
They suggest creating a column in the SharePoint List to store a 'flag'.
I understand the concept of using flags to indicate:
DO run the flow if SOME_FLAG is false
DO NOT run the flow if SOME_FLAG is true
But I am having trouble when it comes to implementing them properly in this scenario.
Specifically, I have added a Yes/No column to my List called LastModifiedByFlow.
The default value is No (i.e. false).
My flow is structured like this:
01) TRIGGER: When an item is created or modified
02) I have added this Trigger Condition
@equals(triggerBody()?['LastModifiedByFlow'],false)
03) This means the flow will run when LastModifiedByFlow is false
04) Create some variables
05) ACTION: Update the item - this includes setting the LastModifiedByFlow value to true
Question
The first time the flow runs, it works great:
the item is updated
the item's LastModifiedByFlow value is set to true so the flow doesn't run again
But how and where in the flow do I set the LastModifiedByFlow value back to false?
So that the flow will run each time a user subsequently modifies the list item?
It is recommended that you first set the default value of the Yes/No column to "Yes". When an item is created, the flow is triggered, and when the flow completes, it changes the column value from "Yes" to "No". When a user later modifies the item, they would manually change "No" back to "Yes" to trigger the flow again. Here is a post for your reference.
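Under this answer's convention (default Yes, flow sets it to No), the trigger condition from the question would be inverted. A sketch, assuming the column's internal name is LastModifiedByFlow:

```
@equals(triggerBody()?['LastModifiedByFlow'], true)
```

The flow then runs only for items where the flag is Yes, and the flow's own Update item (which writes No) does not re-trigger it.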
I need to trigger my ADF pipeline when three files arrive in these paths: container/folder1/file1.parquet
container/folder2/file2.parquet
container/folder3/file3.parquet
The ADF pipeline should trigger only when these 3 subfolders get new files (existing files will be overwritten).
How can we achieve this?
Update: This should be an AND condition, i.e. the pipeline should be triggered only after all 3 files arrive or are updated.
Update:
There is no out-of-the-box feature to achieve this; you can share your idea with the product team here.
What you can do is:
Set up a Storage-event-triggered pipeline on the first destination, i.e. container/folder1/file1.parquet, as explained earlier.
Then, after waiting a few seconds with a Wait activity, use a Get Metadata activity with a Field list
argument of Child items to get the list of files in the folder,
or
a Lookup activity chain to look for files at container/folder2/file2.parquet and container/folder3/file3.parquet using the file list path property. File list examples
Then you can hold the results in variables for convenience and, using conditional activities such as the If Condition activity, check whether all the files exist; if true, you can proceed with the further activities you plan to run in the pipeline once the three files have arrived.
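The final existence check can be written as a single If Condition expression over the Get Metadata outputs. A sketch, with placeholder activity names, assuming each Get Metadata activity has Exists in its Field list:

```
@and(
    activity('Get Metadata file2').output.exists,
    activity('Get Metadata file3').output.exists
)
```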
If instead the 3 files should each trigger the pipeline separately as they become available:
You can simply use 3 different triggers for the same pipeline, each with a different folder in its Blob path properties.
Here is a sample trigger for the first folder, i.e. container/folder1/file1.parquet.
You can also use a pattern such as .parquet in Blob path ends with to match files with different names dynamically.
Note: Blob path begins with and Blob path ends with are the only pattern matching allowed in a Storage Event Trigger.
Similarly, you can create 2 more triggers for container/folder2/file2.parquet and container/folder3/file3.parquet.
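For reference, a Storage Event Trigger definition for the first path might look roughly like this in JSON (the trigger name is a placeholder, and the scope property holding the storage account resource id is omitted):

```json
{
  "name": "trg_folder1_parquet",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/container/blobs/folder1/",
      "blobPathEndsWith": ".parquet",
      "events": [ "Microsoft.Storage.BlobCreated" ]
    }
  }
}
```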
I am trying to create a dependency pipeline for files: before executing my model refresh (a Web activity), I want to make sure all the related files are present in their respective folders and that all files are the latest.
Suppose my model refresh uses the following files present in ADLS:
myadls/raw/master/file1.csv
myadls/raw/dim/file2.csv
myadls/raw/dim/file3.csv
myadls/raw/reporting/file4.csv
We need to compare each file's last-modified date with today's date. If they are equal, the file is the latest. If any of the files is not the latest, I need an email naming the file that is out of date, and I shouldn't trigger my web activity, which does the model refresh.
I have built this pipeline using Get Metadata, ForEach, If Condition, Web, and Set variable activities. But the problem is that I am not able to get an email naming the file that is not the latest.
How can I get an email for a file that is not the latest, per my scenario?
Note, the above folders can have more than 100 files, but I am only looking for specific files I am using in my model.
We use SendGrid API to send emails at my company.
You can easily pass the file names in the body of the email using any email API out there. Write the file names to a variable, then reference the variable in the body. It sounds like you have built almost everything out, so within your ForEach, just add an Append to variable activity that writes each new value to your array variable. Then you can use those array values in your send-email Web activity, or use a string conversion function; there are many ways to do it.
As per your current architecture, you can create a variable per ForEach activity to store the file names.
Within each ForEach, if a file is not the latest, you can save its name using an Append variable activity.
Then, in the final validation, you can concat the variables from all the ForEach loops to get the final list of files that were not modified.
But ideally I would suggest the approach below:
Have the list of files produced as the output of a Lookup activity.
Provide that to a single ForEach activity set to sequential execution.
Within the ForEach, via an If Condition activity and a Get Metadata activity, check whether each file is the latest.
If it is not, append the file name via an Append variable activity.
Once out of the ForEach, via an If Condition, check whether the file-name variable is blank or has values.
If it has values, you can send an email; the file-name variable holds all the files that were not updated.
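The per-file freshness test inside the If Condition can be sketched as a date comparison on the Get Metadata output (activity and variable names here are placeholders, assuming the Field list includes Last modified):

```
@equals(
    formatDateTime(activity('Get File Metadata').output.lastModified, 'yyyy-MM-dd'),
    formatDateTime(utcnow(), 'yyyy-MM-dd')
)
```

In the False branch, an Append variable activity can add the file name to a staleFiles array, and @join(variables('staleFiles'), ', ') then gives a ready-made list for the email body.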
I'm currently busy with a test automation project for mobile devices and I want to store my results in a database.
I do this with a custom assert class I created, which stores the result.
The layout is as follows: there are Runs, which contain Tests, which contain Steps.
When I run in parallel, I create a unique id for the run based on the thread and the date, and store it in a static list. This happens in @BeforeTest.
In @BeforeClass, I search the list for the right thread and assign the correct name to the class's variable.
But I constantly receive a NullPointerException for the list in @BeforeClass, or the variable is null even though it was filled in @BeforeTest.
What are we missing here? Or how is this usually done when you need to store a parameter that is generated once for the whole run?
Thanks in advance,
Jonas Vandevelde
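One common cause of this symptom is that the static list is not safely published across threads, or that the lookup key does not match the thread that stored the entry. A minimal sketch of a thread-safe alternative, using a ConcurrentHashMap keyed by thread id instead of a plain static list (class and method names here are hypothetical, not from the poster's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RunIdRegistry {

    // One run id per thread; ConcurrentHashMap is safe to share across
    // TestNG worker threads without external synchronization.
    private static final Map<Long, String> RUN_IDS = new ConcurrentHashMap<>();

    // Call from @BeforeTest: create (or reuse) the id for the current thread.
    public static String register(String datePart) {
        return RUN_IDS.computeIfAbsent(
                Thread.currentThread().getId(),
                id -> "run-" + id + "-" + datePart);
    }

    // Call from @BeforeClass: look up the id created earlier on this thread.
    public static String current() {
        return RUN_IDS.get(Thread.currentThread().getId());
    }

    public static void main(String[] args) {
        String id = RunIdRegistry.register("20240101");
        // The same thread sees the same id later in the lifecycle.
        System.out.println(id.equals(RunIdRegistry.current()));
    }
}
```

Note that depending on the parallel mode, @BeforeTest and @BeforeClass may run on different threads, in which case keying by thread id will still miss; storing the id as an ITestContext attribute (setAttribute/getAttribute) is the usual way to share a value that is generated once per run.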
I am using the test plugin for VS 2012 (although I have just installed 2013), and need to know:
Is it possible to have a parameter pass a different value from a selected list while load testing?
I have used the sample load test located here: http://www.visualstudio.com/get-started/load-test-your-app-vs and created a new web test that meets my needs as below.
I have a simple journey recorded that is an email registration web page. The journey is essentially completing name & address, email, conf email, password, conf password. On submission of the form, a verification email is sent.
I need to check that this process can handle around 3000 users. The email to actually send the verification has been hardcoded for test purposes, but I need a unique email to submit the form. I would essentially like to run 3000 test cases through, and just change the email address each time.
What is the best way to do this?
The simple answer is do a web search for data driving (or data driven) Visual Studio web performance tests. You should find many articles and tutorials.
In more detail:
Outline of how to data drive a test
Firstly, Visual Studio distinguishes different types of test. A Load Test is a way of running individual test cases many times, as if by many simultaneous users, gathering data about the test executions and producing a report. The test cases that a load test can execute include Web Performance Tests and Coded UI Tests; both of these can be data driven.
Data driving a Web Performance Test requires a data source. The data can be CSV, XML, Spreadsheet, database and in TFS. I will describe using CSV.
Create a CSV file, containing something similar to the following. Note that the top line of field names is required and those names are used within the test.
Name,Email,Telephone
Fred,fred@example.com,0123 456789
George,george@example.com,0123 456790
Harry,harry@example.com,0123 456791
See also CodedUI test does not read data from CSV input file for some notes on CSV file creation.
Open the test project in Visual Studio and open the .webtest file for the test. Use the context (right-click) menu of the top node of the test, ie the test's name (or use the corresponding icon) and select "Add data source ...". Follow the prompts to add the CSV file into the project.
Within the Web Performance Test, expand the request to show the form parameters, query string, or whatever is to use the data. View the properties panel of the relevant field and select the appropriate property; in many cases it is the Value property. Click the little triangle for choosing a value for the property. The popup should show the data source; expand the items shown and select the required field.
After selecting the field, the property will show a value such as {{DataSource1.FileName#csv.Email}}. The doubled curly braces ({{ and }}) indicate the use of a context parameter. All the used data source fields are available as context parameters, and all of the data source fields can be made available by altering the Select Columns property of the data source file. A data source field can also be used as part of a property value by writing values such as
SomeText{{DataSource1.FileName#csv.Email}}AndMoreText
Data source access methods
The data from the datasource can be read and used in four ways. The default is Sequential. Other orders are selected using Solution Explorer to access the properties of the file (eg FileName#csv). The Access Method property can be set to one of:
Sequential - data is read sequentially through the file. After the last line of the file is read, the first line of the file will be the next line to be read. Thus each line may be read more than once.
Random - data is read randomly.
Unique - data is read sequentially through the file. After the end of the file is read, the test will not be executed again. Thus each line can only be read once.
Do not move cursor automatically - intended for more complex tests where the cursor is moved via calls from plugins.
A web test may use more than one data source file. These files may have different access methods. For example one file containing login names and passwords could be accessed Sequentially and another file with other data could be accessed Randomly. This would allow each login to try many different sets of the other data.
Data sources and loops
Web performance tests may contain loops. The properties of a loop include Advance data cursors. This allows, for example, a data source file to contain items to be found and added to a shopping basket such that each loop iteration adds a new item.