I need to retrieve Azure Data Factory pipeline execution logs. I've tried with a Web Activity using the following request:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelineruns/{runId}?api-version=2018-06-01
Unfortunately, I get the following error:
Invoking Web Activity failed with HttpStatusCode - 'NotFound', message - 'The requested resource does not exist on the server. Please verify the request server and retry'
Should I set some additional configuration to be able to retrieve these logs?
You can get a list of pipeline runs using the Pipeline Runs - Query By Factory API.
Next, you can Get a pipeline run by its run ID:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelineruns/{runId}?api-version=2018-06-01
Here is a sample URL:
https://management.azure.com/subscriptions/b83c1ed3-XXXX-XXX-XXXXX-2n83a074t23f/resourceGroups/resource-grp/providers/Microsoft.DataFactory/factories/ktestadf/pipelineruns/0bdaba11-47b7-4885-9796-5801b4bb856a?api-version=2018-06-01
If you are constructing the URL dynamically using the pipeline RunId system variable, you can use the string interpolation method. Notice @{pipeline().RunId} in place of {runId}.
https://management.azure.com/subscriptions/b83c1ed3-XXXX-XXXX-XXXXX-2n83a074t23f/resourceGroups/resource-grp/providers/Microsoft.DataFactory/factories/ktestadf/pipelineruns/@{pipeline().RunId}?api-version=2018-06-01
Note: You would have to use a Trigger run and not Debug, since only a trigger run creates a pipeline run. Make sure you have published everything before triggering the run; Debug picks up unpublished changes, but a pipeline run needs the changes to be published.
And here is a simple Web Activity setup:
Input
{
    "url": "https://management.azure.com/subscriptions/b83c1td3-XXXX-XXXX-XXXXX-2b83a074c13f/resourceGroups/myrg/providers/Microsoft.DataFactory/factories/ktestadf/pipelineruns/0bdaba11-47b7-4885-9796-5801b4bb856a?api-version=2018-06-01",
    "method": "GET",
    "headers": {
        "Content-Type": "application/json"
    },
    "authentication": {
        "type": "MSI",
        "resource": "https://management.azure.com/"
    }
}
You can copy a pipeline run ID manually from the Monitor tab to test.
I did repro this before posting, and the only reason this error pops up is that one of the values you provided is wrong or does not exist (in the case of the pipeline run ID).
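If you want to sanity-check the same REST call outside of ADF, here is a minimal Python sketch; the subscription, resource group, factory and run ID values are placeholders, and it assumes the azure-identity package is available for token acquisition:

import requests
from azure.identity import DefaultAzureCredential  # assumed available

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<factory-name>"
RUN_ID = "<pipeline-run-id>"

# Acquire a token for the Azure Resource Manager audience.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
       f"/factories/{FACTORY_NAME}/pipelineruns/{RUN_ID}")

resp = requests.get(url,
                    params={"api-version": "2018-06-01"},
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())  # status, runStart, runEnd, parameters, message, etc.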
Related
I have multiple pipelines in Azure Data Factory that get data from APIs and then push it to a data lake. I get alerts in case one of the pipelines fails. I then go to the ADF instance and rerun the failed pipeline manually. I am trying to come up with an automated way of rerunning a pipeline in case it fails. Any suggestions or guidance would be helpful. I thought of Azure Logic Apps or Power Automate, but it turns out they don't have the right actions to trigger a failed pipeline.
If the pipeline design can be modified, then a method can be to:
1. Set a parameter pMax_rerun_count (this is to ensure the pipeline doesn't go into an indefinite loop).
2. Set 2 variables:
(2.a) Pipeline_status, default value: Fail
(2.b) Max_loop_count, default value: 0. This is to ensure the pipeline doesn't run in endless loops; the maximum permissible retry count (i.e. pMax_rerun_count) is passed as a parameter during the pipeline run.
3. All activities should be inside an Until activity whose expression is @or(equals(variables('Pipeline_status'),'Success'), equals(pipeline().parameters.pMax_rerun_count, variables('Max_loop_count'))).
4. The first activity inside the Until activity will be a Set Variable activity that increments the value of the variable Max_loop_count by 1. (Note that in ADF a Set Variable activity cannot reference the variable it is setting, so in practice the increment needs a helper variable.)
5. The final activity inside the Until activity will be a Set Variable activity that sets Pipeline_status to "Success".
The purpose here is to run all intended activities inside the Until block until they complete successfully; pMax_rerun_count ensures the pipeline doesn't go into indefinite loops. This setup can be considered a framework if all pipelines need to be rerun in case of failure. The equivalent control flow is sketched below.
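To make the retry semantics concrete, the Until wrapper behaves like this loop (plain Python, purely illustrative; run_intended_activities is a hypothetical stand-in for the real activities inside the Until block):

def run_intended_activities() -> None:
    """Hypothetical stand-in for the actual activities inside the Until block."""
    ...

def run_with_retries(p_max_rerun_count: int) -> None:
    pipeline_status = "Fail"  # variable (2.a), default 'Fail'
    max_loop_count = 0        # variable (2.b), default 0

    # Until expression: or(equals(Pipeline_status,'Success'),
    #                      equals(pMax_rerun_count, Max_loop_count))
    while not (pipeline_status == "Success" or max_loop_count == p_max_rerun_count):
        max_loop_count += 1              # first Set Variable activity
        try:
            run_intended_activities()    # the intended activities
            pipeline_status = "Success"  # final Set Variable activity
        except Exception:
            pass  # leave status as 'Fail'; the Until loop will retry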
I came up with a streamlined way of rerunning failed pipelines. I decided to use the Azure Data Factory REST API alongside Azure Logic Apps to solve the problem.
I run the Logic App on a scheduled recurrence and then use the following API calls:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/queryPipelineRuns?api-version=2018-06-01
This call (Pipeline Runs - Query By Factory) returns the pipeline runs. To filter it down to failed runs, we can add the following body:
{
    "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
    "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
    "filters": [
        {
            "operand": "status",
            "operator": "Equals",
            "values": [
                "Failed"
            ]
        }
    ]
}
After getting the failed pipeline runs, we can then invoke the following API for each failed pipeline to rerun it:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelines/{pipelineName}/createRun?api-version=2018-06-01
This solution can be built using a scripting language, a Power Automate workflow, or Azure Logic Apps.
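For reference, here is a minimal Python sketch of the same flow; all angle-bracket values and the bearer token are placeholders, and the time window is just the one from the example body above:

import requests

BASE = ("https://management.azure.com/subscriptions/<subscription-id>"
        "/resourceGroups/<resource-group>/providers/Microsoft.DataFactory"
        "/factories/<factory-name>")
PARAMS = {"api-version": "2018-06-01"}
HEADERS = {"Authorization": "Bearer <token>"}  # assumed: a valid ARM token

# Pipeline Runs - Query By Factory: find runs that failed in the window.
query = requests.post(
    f"{BASE}/queryPipelineRuns",
    params=PARAMS,
    headers=HEADERS,
    json={
        "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
        "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
        "filters": [{"operand": "status", "operator": "Equals", "values": ["Failed"]}],
    },
)
query.raise_for_status()

# Rerun each failed pipeline by name via createRun.
for run in query.json().get("value", []):
    rerun = requests.post(
        f"{BASE}/pipelines/{run['pipelineName']}/createRun",
        params=PARAMS,
        headers=HEADERS,
    )
    rerun.raise_for_status()
    print(run["pipelineName"], "->", rerun.json().get("runId"))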
As of now there is no built-in method to automate the process of "rerunning from a failed activity" in ADF, but each activity has a Retry option that you should certainly employ. In the pipeline, you may retry any activity as many times as necessary if it fails.
Allow the trigger to point to a new pipeline with an Execute Pipeline activity that calls the existing pipeline containing the copy activity.
Then choose the Advanced -> Wait for completion option.
After the Execute Pipeline activity completes, the Webhook activity should contain the logic to halt the DW.
I am updating a QuickSight data source in my AWS account:
aws quicksight update-data-source --cli-input-json file://update-stag-data-source-request.json --output json
And I get the following response:
{
"Status": 202,
"Arn": "arn:aws:quicksight:eu-west-1:<my-aws-account-nr>:datasource/099676d0-99e3-44d7-b581-d6e532e72961",
"DataSourceId": "099676d0-99e3-44d7-b581-d6e532e72961",
"UpdateStatus": "UPDATE_IN_PROGRESS",
"RequestId": "1d304a80-e507-46c3-acb3-237a58237e77"
}
So currently the status of this request is "UPDATE_IN_PROGRESS", but how do I track the status afterwards?
I need to do this because the update seems to fail eventually, for reasons unknown: I still see the old setup of the data source several minutes later. I believe that knowing the eventual request status would help me debug the issue.
Check the describe-data-source command; it will return DataSource.Status and, in case of any failure, you can check DataSource.ErrorInfo.Message.
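For example, here is a small polling sketch using boto3; the account ID and data source ID are the placeholders from the question:

import time
import boto3

client = boto3.client("quicksight", region_name="eu-west-1")

# Poll until the update settles, then inspect the outcome.
while True:
    ds = client.describe_data_source(
        AwsAccountId="<my-aws-account-nr>",
        DataSourceId="099676d0-99e3-44d7-b581-d6e532e72961",
    )["DataSource"]
    if ds["Status"] != "UPDATE_IN_PROGRESS":
        break
    time.sleep(5)

print(ds["Status"])  # e.g. UPDATE_SUCCESSFUL or UPDATE_FAILED
print(ds.get("ErrorInfo", {}).get("Message"))  # failure reason, if any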
I have an issue where Azure Data Factory Integration runtimes will not start.
When I trigger the pipeline, I get the following error in Monitor -> Pipeline runs: "InternalServerError executing request"
In "view activity run" I can see that it's the Data Flow that failed with the error
{
    "errorCode": "1006",
    "message": "Hit unexpected exception and execution failed.",
    "failureType": "SystemError",
    "target": "data_wrangling_ks",
    "details": []
}
(the two successful runs are from a Self-Hosted IR)
When I try to start "Data flow debug", it just disappears without any information.
This issue started earlier today without any changes to the Data Factory config or the pipeline.
Please help, and thank you for your time.
SOLVED:
I changed the Compute type from General Purpose to Compute Optimized and that solved the problem.
By looking at the error message, it seems like this issue occurred due to an ADF-related service outage in the West Europe region. The issue has been resolved by the product team. Please open an MSDN thread if you ever encounter this issue again.
Ref: Azure Data Factory Pipeline failed while running data flows with error message : Hit unexpected exception and execution failed
I just started using Nightwatch with BrowserStack, and I'm noticing that when we get a failed test, Nightwatch registers the failure, but BrowserStack does not.
Here is the sample test I am using.
Also, I am using the free trial version of BrowserStack.
My question is:
Are there any ideas on how to tell BrowserStack that a test run failed?
From BrowserStack doc:
REST API
It is possible to mark tests as either a pass or a fail, using the
following snippet:
var request = require("request");
request({
    uri: "https://user:key@www.browserstack.com/automate/sessions/<session-id>.json",
    method: "PUT",
    form: {
        "status": "completed",
        "reason": ""
    }
});
The two potential values for status can either be completed or error. Optionally, a reason can also be passed.
My questions are:
How I can get 'session-id' after test execution ?
What if I can see "completed" status in dashboard already ?
A session on BrowserStack has only three types of statuses:
Completed, Error or Timeout. Selenium (and hence, BrowserStack) does not have a way of understanding whether a test has passed or failed. It is from the multiple assertions in your tests that appear on your console that you infer whether a test has passed or failed. These assertions, however, do not reach BrowserStack. As you rightly identified, you can use the REST API to change the status of the session to 'Error' if you see a failure in your console.
I would suggest fetching the session ID of the test while it is being executed, since fetching the session ID after the test execution is a lengthy process. In Nightwatch, you can fetch the session ID as follows:
browser.session(function(session) {
    console.log(session.sessionId);
});
Yes, you can certainly change the status of the session once it is completed. That's where the REST API comes in to help!
If you came here searching for a solution in Python, you could use:

import requests

# driver, USERNAME and ACCESS_KEY are assumed to be defined already.
requests.put(
    "https://api.browserstack.com/automate/sessions/{}.json".format(driver.session_id),
    auth=(USERNAME, ACCESS_KEY),
    json={"status": "failed", "reason": "test failed"})
I have an application that performs a query against Azure AD. Running a bog-standard GET for users works perfectly:
https://graph.windows.net/00000000-0000-0000-0000-000000000000/users?api-version=2013-04-05
But if I attempt a differential query:
https://graph.windows.net/00000000-0000-0000-0000-000000000000/users?api-version=2013-04-05&deltaLink=
I get the following error:
{
    "odata.error": {
        "code": "Directory_BindingRedirection",
        "message": {
            "lang": "en",
            "value": "Tenant information is not available locally. Use the following Urls to get the information."
        },
        "values": [
            {"item": "Url1", "value": "https://directory-s1-ch1.directory.windows.net"},
            {"item": "Url2", "value": "https://directory-s1-sn2.directory.windows.net"},
            {"item": "Url3", "value": "https://directory-s1-bl2.directory.windows.net"},
            {"item": "Url4", "value": "https://directory-s1-co1.directory.windows.net"}
        ]
    }
}
This post suggests that this may be a problem with Azure AD rather than an expected error case that should be coded for. Is this the case, or should this error be handled as per the instructions contained within?