Email alerts in a pipeline with a condition using Data Factory - Azure

I am trying to make the sending of emails depend on the execution environment, so that other business users don't get flooded with mails coming from a lower environment, i.e. test or dev.
The email should only be sent to the business users from production.
I am using the following code as dynamic content for the If Condition; however, it is very specific to only one environment:
@equals(pipeline().DataFactory, 'AdPrd117')
However, is there a way to make the above dynamic content more robust across all three environments, so that the If Condition can essentially be expressed as:
if data factory name.
Also, is it possible to have the email alerts (which are generated with the help of a notebook) sent for each environment (dev, test, prod) to a specific email address that can be added to the If Condition?

We set up our DataFactory with a suffix that indicates the environment (Dev, Tst, Prod).
I have the following expression that checks which environment a pipeline is running in to figure out which directory of an SFTP Server to use (Development, Test, or Production):
@if(
    startswith(pipeline().DataFactory, 'dev'),
    'Development',
    if(
        startswith(pipeline().DataFactory, 'tst'),
        'Test',
        'Production'
    )
)
Likewise, we use similar code to process our cubes, and to figure out which LogicApp to use to send messages (we have 1 logic app per environment).
Edit:
Based on the comments, maybe this would help you. This assumes your environments have a consistent naming convention, and your dev/test ones do not contain prd.
Create a variable called Recipient of type string in your pipeline. Then add a Set Variable component to your pipeline. Give it the name "Set recipient" on the General tab, and then on the Variables Tab, set the name to the "Recipient" variable you just created.
In the Value field, use the following:
@if(
    equals(substring(pipeline().DataFactory, 2, 3), 'Prd'),
    'MyBusinessUsers@contoso.com',
    'MyDevelopers@contoso.com'
)
This means you now have a variable containing the group you want to send mails to, depending on the name of the Data Factory.
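You can then reference that variable wherever the recipient is needed, for instance as the To value of whichever activity actually sends the mail (a minimal usage sketch; how the mail is sent, e.g. a Logic App call or a Web activity, is not specified here):
@variables('Recipient')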

Related

How to Use a Key Vault Secret in a Task Group Without Creating a Task Group Parameter

I have a task group in Azure DevOps that does the following tasks (among other tasks):
1 - Read secrets from a Key Vault
2 - Set/Update an application's setting to a secret from the key vault
[
    {
        "name": "Foo",
        "value": "$(FooKeyVaultKey)",
        "slotSetting": false
    }
]
The only parameter I want in the task group, given the above tasks, is the name of the key vault it takes the secrets from. The problem is, since the second task uses a variable (which is set by task 1), the task group creates a new parameter called "FooKeyVaultKey".
I tried accessing the variables in different ways, but only using parentheses works.
$(FooKeyVaultKey) - works, but creates parameter
${{FooKeyVaultKey}} - doesn't work
${FooKeyVaultKey} - doesn't work
$[FooKeyVaultKey] - doesn't work
A simple trick is to use a blank expression (Microsoft calls it macro syntax) nested in the variable expression:
$($()foo)
This way a parameter isn't created for the task group.
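Applied to the app settings JSON from the question, that would look something like this (a sketch; as the edit below notes, the nested macro does not always expand cleanly):
[
    {
        "name": "Foo",
        "value": "$($()FooKeyVaultKey)",
        "slotSetting": false
    }
]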
EDIT:
I noticed some inconsistent behavior where at times the app configuration would be set to literally $($()foo).
What you are trying to achieve is not possible. This is stated in the documentation:
All the '$(vars)' from the underlying tasks, excluding the predefined variables, will surface as the mandatory parameters for the newly created task group.
For example, let's say you have a task input $(foobar), which you don't intend to parameterize. However, when you create a task group, the task input is converted into task group parameter 'foobar'. Now, you can provide the default value for the task group parameter 'foobar' as $(foobar). This ensures that at runtime, the expanded task gets the same input it's intended to.

'Delay until' finish time of 'Queue a new build' not working in Azure Logic App

I'm triggering an Azure Logic App from an https webhook for a docker image in Azure Container Registry.
The workflow is roughly:
When a HTTP request is received
Queue a new build
Delay until
FinishTime of Queue a new build
See: Workflow image
The Delay until action doesn't work in that the queried FinishTime is 0001-01-01T00:00:00.
It complains about the wrong format, so I manually added a Z after the FinishTime keyword.
Now the time stamp is in the right format, however, the timestamp 0001-01-01T00:00:00Z obviously doesn't make sense and subsequent steps are executed without delay.
Anything that I am missing?
edit: Queue a new build queues an Azure pipeline build. I.e. the FinishTime property comes from the pipeline.
You need to set a timestamp in the future; the timestamp 0001-01-01T00:00:00Z you passed to the "Delay until" action is not a future time. If you set a timestamp such as 2020-04-02T07:30:00Z, the "Delay until" action will take effect.
Update:
I don't think "Delay until" can do what you expect, but maybe you can refer to the operations below. Just add a "Condition" action to check whether the FinishTime is greater than the current time.
The expression in the "Condition" is:
sub(ticks(variables('FinishTime')), ticks(utcNow()))
In a word: if the FinishTime is greater than the current time, do the "Delay until" action; if the FinishTime is less than the current time, do whatever else you want. (By the way, pay attention to the time zone of your timestamp; you may need to convert everything to UTC.)
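If you prefer the Condition to evaluate a boolean directly rather than comparing the tick difference to 0, an equivalent check would be something like the following (a sketch, assuming FinishTime is kept in a variable as above):
greater(ticks(variables('FinishTime')), ticks(utcNow()))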
I've been in touch with an Azure support engineer, who has confirmed that the Delay until action should work as I intended to use it, however, that the FinishTime property will not hold a value that I can use.
In the meantime, I have found a workaround, where I'm using some logic and quite a few additional steps. Inconvenient but at least it does what I want.
Here are the most important steps that are executed after the workflow gets triggered from a webhook (docker base image update in Azure Container Registry).
Essentially, I'm initializing the following variables and queuing a new build:
buildStatusCompleted: String value containing the target value completed
jarsBuildStatus: String value containing the initial value notStarted
jarsBuildResult: String value containing the default value failed
Then, I'm using an Until action to monitor when the jarsBuildStatus value switches to completed.
In the Until action, I'm repeating the following steps until jarsBuildStatus changes its value to buildStatusCompleted:
Delay for 15 seconds
HTTP request to Azure DevOps build, authenticating with personal access token
Parse JSON body of previous raw HTTP output for status and result keywords
Set jarsBuildStatus = status
After breaking out of the Until action (loop), the jarsBuildResult is set to the parsed result.
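For reference, the exit condition of that Until action can be written as an expression along these lines (a sketch using the variable names above):
@equals(variables('jarsBuildStatus'), variables('buildStatusCompleted'))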
All these steps are part of a larger build orchestration workflow, where I'm repeating the given steps multiple times for several different Azure DevOps build pipelines.
The final action in the workflow is sending all the status, result and other relevant data as a build summary to Azure DevOps.
To me, this is only a workaround and I'll leave this question open to see if others have suggestions as well or in case the Azure support engineers can give more insight into the Delay until action.
Here's an image of the final workflow (at least, the part where I implemented the Delay until action):
edit: Turns out, I can simplify the workflow because there's a dedicated Azure DevOps action in the Logic App called Send an HTTP request to Azure DevOps, which omits the need for manual authentication (Azure support engineer pointed this out).
The workflow now looks like this:
That is, I can query the build status directly and set the jarsBuildStatus as
@{body('Send_an_HTTP_request_to_Azure_DevOps:_jar''s')['status']}
The code snippet above is automagically converted to a value for the Set variable action. Thus, no need to use an additional Parse JSON action.

Read webhook payload in Gitlab CI

I have a project (PROJECT_A) that is triggered through a webhook, and expects the variable $PRODUCT to be set. Its value is used to trigger a certain path in the build. The job in the .gitlab-ci.yml file looks like this:
deploy:
  stage: publish
  script:
    - ./generate_doc.sh $PRODUCT
A webhook call looks like this:
http://<GITLAB_URL>/api/v4/projects/710/ref/master/trigger/pipeline?token=<TOKEN>&variables[PRODUCT]=<PRODUCT>
I call this trigger through a webhook from other projects, including PROJECT_B.
So I manually filled in the desired value in the respective webhooks, e.g. for PROJECT_B:
http://<GITLAB_URL>/api/v4/projects/710/ref/master/trigger/pipeline?token=<TOKEN>&variables[PRODUCT]=PROJECT_B
When the pipeline in PROJECT_A is triggered, $PRODUCT has the value PROJECT_B, as expected.
I would like to parameterize the pipeline further and take, among others, the commit message into account. All the information I need is apparently provided in the webhook payload.
Is there a built-in way to read this payload in a pipeline? Or, alternatively, to put the contents of the payload into a variable in the webhook like this:
http://<GITLAB_URL>/api/v4/projects/710/ref/master/trigger/pipeline?token=<TOKEN>&variables[COMMIT_REF]=???
I have found discussions about doing parameterized Jenkins builds using the webhook payload, including this related question. There is also a similar question in the Gitlab forum, without any answer.
Is there a way to access that payload in a GitLab CI pipeline? I could probably extract the provided values with a jq call, but how can I get the JSON in the first place?
If you run compgen -v to show the environment variables when triggering the pipeline in the UI (without JSON payload) you get 3 fewer lines in your job log than when POSTing a JSON payload.
The additional variables are:
CI_BUILD_TRIGGERED
CI_PIPELINE_TRIGGERED
TRIGGER_PAYLOAD
If you print their values out and re-run the pipeline:
echo CI_BUILD_TRIGGERED=$CI_BUILD_TRIGGERED
echo CI_PIPELINE_TRIGGERED=$CI_PIPELINE_TRIGGERED
echo TRIGGER_PAYLOAD=$TRIGGER_PAYLOAD
You get (for username YOUR_USER_NAME and repo name YOUR_REPO_NAME)
CI_BUILD_TRIGGERED=true
CI_PIPELINE_TRIGGERED=true
TRIGGER_PAYLOAD=/builds/YOUR_USER_NAME/YOUR_REPO_NAME.tmp/TRIGGER_PAYLOAD
So as you can see, the payload is stored in a file whose path is exposed as TRIGGER_PAYLOAD, in a temporary directory suffixed .tmp; re-running the pipeline and printing the file out (cat) shows that it contains the payload, which in my case is JSON.
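From there you can pull individual fields out of the payload file with jq in the job script, for example (a sketch; the available fields depend on the webhook event, so .commits[0].message is only an assumption for a push event, and passing it on to generate_doc.sh is likewise illustrative):
deploy:
  stage: publish
  script:
    # Read the trigger payload file and extract the commit message (assumes jq
    # is available in the job image and the payload is a push-event JSON body).
    - COMMIT_MESSAGE=$(jq -r '.commits[0].message' "$TRIGGER_PAYLOAD")
    - ./generate_doc.sh "$PRODUCT" "$COMMIT_MESSAGE"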

How to assign individual attributes to AnyLogic agents

I would like to solve the following issue:
agent based model with a population of 500 agents
each agent gets assigned an ID number using a variable called v_agentID, via the statement v_agentID++; after being created
The agent should then be further processed based on a condition monitoring the individual waiting time.
How can I assign individual attributes like waiting times (as a result of the calculation waitingTime = waitingTimeEnd - waitingTimeStart) to each individual agent?
Thanks a lot for your help.
Bastian
Many ways:
1) create a cyclical event on the individual agent that calculates waitingTime with the formula you provided
2) create a dynamic variable for each agent and make it equal to waitingTimeEnd-waitingTimeStart
3) create the variable whenever you want and change it in all the agents:
for (Agent a : agents) {
    a.waitingTime = a.waitingTimeEnd - a.waitingTimeStart;
}
4) Find the agent with the id you want and assign the variable to it
Agent theAgent = findFirst(agents, a -> a.id == theIdYouWant);
theAgent.waitingTime = theAgent.waitingTimeEnd - theAgent.waitingTimeStart;
5) If you know the index of the agent just do
agents.get(index).waitingTime = agents.get(index).waitingTimeEnd - agents.get(index).waitingTimeStart;
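For option 2, one straightforward way to obtain those values is to stamp the agent when it enters and leaves the block where it waits, in that block's actions (a sketch; the block choice and the field names waitingTimeStart / waitingTimeEnd / waitingTime are assumptions, not taken from the question):
// Fields defined on the agent type: double waitingTimeStart, waitingTimeEnd, waitingTime;

// "On enter" action of the block where the agent starts waiting (e.g. a Queue):
agent.waitingTimeStart = time();

// "On exit" action of the same block:
agent.waitingTimeEnd = time();
agent.waitingTime = agent.waitingTimeEnd - agent.waitingTimeStart;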

Selecting endpoints depending on the level at which tests are run

I have a bit of a structural dilemma in SoapUI. When running tests, it is possible to run them at project, test suite or test case level.
Currently, we can run a whole project at project level and it will display a prompt box to select an endpoint (through a project-level setup script), and it produces a project report using the project-level teardown script.
However, the tester may not want to run a whole project and may only want to run a test suite or even a single test case, and it would be a hassle to disable the suites or cases you don't want to run.
The problem I have is that if I start putting prompt boxes to select endpoints at suite or case level, it will ask for an endpoint every time we hit a suite or case. Another thing is that I am thinking of not creating suite or test case reports, because when running many suites or cases one by one it is just overkill on reporting.
I like your thinking on this, but I was speaking with my professional colleague and what we're thinking is this:
Add the code below at test suite and test case level, in their relevant setup scripts, where it asks for the endpoint (this is the same code used in the project setup script for selecting the endpoint):
import com.eviware.soapui.support.*

def alert = com.eviware.soapui.support.UISupport
def urls = []
project.properties.each {
    if (it.value.name.startsWith("BASE_URL_")) {
        urls.push(it.value.name.replace("BASE_URL_", ""))
    }
}
def urlName = alert.prompt("Please select the environment URL", "Enter URL", urls)
if (urlName) {
    def url = project.getPropertyValue("BASE_URL_" + urlName)
    def urlBase = "BASE_URL_" + urlName
    project.setPropertyValue("BASE_URL", url)
    switch (urlBase) {
        case "BASE_URL_TEST":
            project.setPropertyValue("DOMAIN_NAME", "TEST");
            break;
        case "BASE_URL_STAGE":
            project.setPropertyValue("DOMAIN_NAME", "STAGE");
            break;
        default:
            project.setPropertyValue("DOMAIN_NAME", "NO DOMAIN");
            break;
    }
} else {
    log.warn 'haven\'t received user input'
    log.warn 'No base URL is selected or cancelled, try again'
    assert false
}
Now what we would add is the following; we may need to use properties, but again see what you think is best:
If the test is run at project level, it will prompt to select an endpoint through the project setup script but it will not ask again through the test suite or test case setup scripts, so there is only a single endpoint selection.
If the test is run at suite level, it will prompt to select an endpoint through the test suite setup script but it will not ask again through the test case setup script, so again there is only a single endpoint selection.
If the test is run at test case level, it only runs that test case, so it is at the lowest level and asks for an endpoint for that test case.
We can't have setup scripts disabled at any level, because there may be other code in those setup scripts that needs to be executed; we just need a way to say, depending on the level being run, don't ask for the endpoint again at the lower levels.
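Roughly, the idea is to guard the prompt in the suite- and case-level setup scripts so it only runs when no endpoint has been chosen yet, along these lines (a sketch only; it assumes BASE_URL stays empty until one of the prompts has populated it):
// Suite/case setup script sketch: only prompt when a higher-level setup
// script has not already populated BASE_URL.
if (!project.getPropertyValue("BASE_URL")) {
    // ... run the endpoint prompt code shown above ...
}
// ... any other setup logic that must always run goes here ...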
It seems complicated to implement, but does anyone know the best way to implement this, or does anyone have a better idea than this theory?
Thanks
For a moment, let us assume you get it done for all levels (project, suite, and each case). Maybe you forgot about the step level ;-)
Do you have any pros in your approach? For me, no.
Cons in your approach:
Each time the user executes a test (be it a project / suite / any test case), the engineer needs to select a value from the drop-down, which is unwanted when testing against the same server as the previous test case, and a little annoying.
Test execution requires manual intervention each time it is invoked.
A user interface is required, as a drop-down is being used.
It will become a roadblock / hurdle for end-to-end automation.
Test execution can't be done in headless mode, and this is important if you need to use Continuous Integration tools.
Proposed Approach :-
If I had to do the above, I would do the following; it would be clean and simple, and none of the complications you mentioned in the long summary would arise.
It looks like the following project properties are defined with the addresses of the test servers:
BASE_URL_TEST
BASE_URL_STAGE
There is also another project property defined, BASE_URL, and all the above logic exists to let the user copy the value of one of the above properties into BASE_URL.
Now all the user has to do is change the value of the project property BASE_URL. The user just has to set one of the below values by hand, whichever is needed, before proceeding with their tests:
${#Project#BASE_URL_TEST} or
${#Project#BASE_URL_STAGE}
NOTE that a property value can be referenced from another property by the use of Property Expansion, as above.
With the above, the user can set whatever is needed and change it only when required, i.e. when the test server changes.
No setup script is required at any level any more; just change the value of the property.
Properties are there to make life simple; they can be used in any number of places and make the project easy to maintain.
Most importantly, this overcomes the cons mentioned at the beginning.
It is general practice to use SoapUI to design the tests and the SOAPUI_HOME/bin/testrunner.bat or .sh utility to execute the tests in command-line mode; that is the way to achieve Continuous Integration.
That's why the use of properties helps here to achieve the above without any issues.
Even simpler:
Just have one project property, BASE_URL (remove the others); the user just has to edit the property value with the test server name / IP address once, say http://testjuniper. Isn't it dead simple?
And I believe the engineer would definitely know which server he / she is using to execute the tests.
Having said that, the user does not have to bother at all, irrespective of whether a project / suite / test case is being executed, as long as testing is carried out against the same server / environment.
Once test execution against the TEST environment is finished, the engineer may move on to another environment, say STAGING, by just changing the BASE_URL property value accordingly.
