Azure DevOps - Continue on timeout error in pipeline deployment

If an Azure deployment pipeline stage fails with a timeout error, are there any options available to continue with the next stage? I have tried a few options, like continue-on-error and partial-success triggers; they all work when an ordinary error occurs, but not on a timeout error.
Thank you.

For the stage trigger option "Trigger even when the selected stages partially succeed": it requires the previous stage to partially succeed. If all the tasks of the first stage fail, the second stage won't be triggered; the first stage needs at least one successful task.
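In a YAML pipeline, one way to approximate "continue after a timeout" is to cap the first stage's job with timeoutInMinutes and give the next stage an always() condition, which runs even when the previous stage failed or was canceled (a timed-out job ends up in one of those states). A minimal sketch, assuming a YAML pipeline (the question may concern a classic release) and placeholder stage, job, and script names:

stages:
- stage: Deploy
  jobs:
  - job: LongRunningDeploy
    timeoutInMinutes: 60              # job is stopped when this limit is hit
    steps:
    - script: echo "long-running deployment goes here"   # placeholder
- stage: Cleanup
  dependsOn: Deploy
  condition: always()                 # run even if Deploy failed, timed out, or was canceled
  jobs:
  - job: ContinueAnyway
    steps:
    - script: echo "continuing after Deploy"

always() is deliberately broader than succeededOrFailed(), which does not cover canceled dependencies.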

Related

Job is pending, waiting for approval, then becomes skipped

I have an Azure pipeline where the last stage needs approval from an authorised person. The pipeline seems to work well, and when this last stage is reached the status is "Job is pending..." as expected.
The problem is that after a certain time, the job eventually turns to "skipped" status automatically, so the person who should approve doesn't have time to do so.
Unfortunately I can't find what's causing this. How would I go about debugging this issue? Is there any log I can look at that would tell us why the job is being skipped (I couldn't find any such log)? If not, any idea what can transition a job from "waiting for approval" to "skipped" without us doing anything?
The problem is that after a certain time, the job eventually turns to "skipped" status automatically.
According to your screenshot, you are using approvals and checks. If the approvers do not approve or reject the request before the specified timeout, the stage being marked as skipped is the expected behavior.
You can check the timeout setting on your resources. By default it is set to 30 days, and you can change it where you define the approvals and checks.
Please note: the maximum timeout is 30 days.
For your reference, you can find more details in the official doc: Define approvals and checks.
Azure Pipelines pauses the execution of a pipeline prior to each stage, and waits for all pending checks to be completed. Checks are re-evaluated based on the retry interval specified in each check. If all checks are not successful before the timeout specified, then that stage is not executed. If any of the checks terminally fails (for example, if you reject an approval on one of the resources), then that stage is not executed.
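For context, approvals and checks (including their timeout) are configured on the protected resource, such as an environment, rather than in the pipeline YAML. A minimal sketch of a deployment job that pauses on such checks, with placeholder stage, job, and environment names:

stages:
- stage: Production
  jobs:
  - deployment: DeployProd
    environment: prod               # approvals/checks and their timeout live on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying"   # placeholder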

How can I trigger a GitLab pipeline using GitLab event triggers?

May I know which steps are needed to have a GitLab pipeline triggered by GitLab events?
Note that it is more the job that can be set to run according to an event (like a push, using only:refs or rules:).
For the pipeline itself, check the different types of pipelines, which hint at different events:
schedule/cron-like event
merge request event
merged results event
API event to trigger the pipeline
...
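As an illustration, rules: can match the event that created the pipeline via the predefined CI_PIPELINE_SOURCE variable. A minimal .gitlab-ci.yml sketch with placeholder job names:

mr_job:
  script:
    - echo "runs for merge request events"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

nightly_job:
  script:
    - echo "runs for scheduled (cron-like) pipelines"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'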
The OP, anonymous, adds in the comments:
It is resolved now: I just deleted the runner's toml config file, registered the runner again, and then ran the jobs.

Is there a way to keep Azure Data Factory from reporting a pipeline as failed when only one activity has failed?

I have created a pipeline in Azure Data Factory that comprises multiple activities, some of which are used as fallbacks if certain activities fail. Unfortunately, the pipeline is always reported as "failed" in the monitor tab, even if the fallback activities succeed. Can pipelines be set to appear as "succeeded" in the monitoring tab even if one or more activities fail?
Can pipelines be set to appear as "succeeded" in the monitoring tab even if one or more activities fail?
There are three ways to handle this:
Try-Catch block: the pipeline succeeds if the Upon Failure path succeeds.
Do-If-Else block: the pipeline fails, even if the Upon Failure path succeeds.
Do-If-Skip-Else block: the pipeline succeeds if the Upon Failure path succeeds.
With the Try-Catch or Do-If-Skip-Else approach you can get a success status even if one or more activities fail.
Reference - https://learn.microsoft.com/en-us/azure/data-factory/tutorial-pipeline-failure-error-handling
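As an illustration of the Try-Catch pattern, the fallback activity is wired to the main activity with a Failed dependency condition. A minimal sketch of the pipeline JSON, using placeholder Wait activities in place of real work:

{
  "name": "TryCatchSketch",
  "properties": {
    "activities": [
      {
        "name": "MainActivity",
        "type": "Wait",
        "typeProperties": { "waitTimeSeconds": 1 }
      },
      {
        "name": "UponFailureFallback",
        "type": "Wait",
        "typeProperties": { "waitTimeSeconds": 1 },
        "dependsOn": [
          {
            "activity": "MainActivity",
            "dependencyConditions": [ "Failed" ]
          }
        ]
      }
    ]
  }
}

Because the main activity is then no longer a leaf activity, the pipeline's run status follows the fallback's outcome, which is what makes the Try-Catch pattern report success.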

Azure Pipelines fails in Deployment Slots

I have created my release pipeline with a few stages (DEV, QA, Production), where the Production App Service has a deployment slot with auto swap enabled. However, when I perform the release, it fails in the slot-swap task with the error message below. I have gone through many articles on Google and Stack Overflow, but they don't seem to help. Any pointers on what could be wrong would be very helpful.
2021-08-18T16:30:41.0295503Z ##[error]Error: Failed to swap App Service 'jdmessaging' slots - 'preprod' and 'production'. Error: Conflict - Cannot modify this site because another operation is in progress. Details: Id: 32473596-226d-46b4-9c98-31285c27418e, OperationName: SwapSiteSlots, CreatedTime: 8/18/2021 4:28:43 PM, WebSystemName: WebSites, SubscriptionName: 74d83097-e9c9-4ca7-9915-7498a429def4, WebspaceName: DEMO-CentralUSwebspace, SiteName: jdmessaging, SlotName: preprod, ServerFarmName: , GeoOperationId: (null) (CODE: 409)
Note: the first release with deployment slots completed successfully. The issue appeared when we attempted the second release.
This issue looks like the following scenario:
One operation was triggered and had not yet completed; meanwhile another operation (a site modification) was triggered on the same site.
The second operation waited for the first to complete on the same site and ultimately failed.
Suggestion:
Wait for some time and retry the operation. It should succeed.
If it still fails, please create a technical support ticket; the technical support team can help you troubleshoot the issue from the platform end.
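If the swap runs as a pipeline task, one pragmatic mitigation is a task-level retry, since the conflicting operation often clears by the next attempt. A minimal YAML sketch using the AzureAppServiceManage task; the service connection and resource group names are placeholders, and the OP's pipeline is a classic release, so treat this as an equivalent rather than the exact setup:

steps:
- task: AzureAppServiceManage@0
  retryCountOnTaskFailure: 3                       # retry, since a concurrent operation can cause a 409 Conflict
  inputs:
    azureSubscription: 'my-service-connection'     # placeholder service connection
    Action: 'Swap Slots'
    WebAppName: 'jdmessaging'
    ResourceGroupName: 'my-resource-group'         # placeholder
    SourceSlot: 'preprod'
    SwapWithProduction: true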

What might cause the 'InternalServerError executing request' when running a manually triggered pipeline?

The setup of the pipeline is a simple import from a .csv file stored in Azure Blob Storage to an Azure SQL database table.
When I run the pipeline in Debug by using the 'Debug' button in the portal, the job finishes in 8 seconds.
When I run the pipeline with the Add trigger\Trigger now button it runs for 20+ minutes and fails with the error 'InternalServerError executing request'.
I recreated the pipeline and components from scratch and tried using a Data Flow (Preview) and a Copy Data activity; both give the same result.
The expected output is a successful run of the pipeline, the actual output is the 'InternalServerError executing request' error.
The problem was with source control, which we had recently enabled. 'Add trigger\Trigger now' uses the published version of the pipeline, while Debug uses the currently saved version. The 20-minute timeout and the 'InternalServerError executing request' are a poor way of saying: 'You did not publish your pipeline yet' :)
Just to add another possible cause in case someone else stumbles upon this:
I had the same error multiple times when I had many concurrent pipeline runs, in my case triggered by hundreds of new files in a OneDrive folder ("manually" triggering the pipeline via an Azure Logic App). Some of the runs succeeded, some of them failed. When I reran the failed runs or loaded fewer files at once, it worked.
So Data Factory might not be ready to handle heavy parallel execution very well.
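If concurrency is indeed the culprit, ADF pipelines expose a pipeline-level concurrency property that queues runs beyond the limit instead of starting them all at once. A minimal sketch of the pipeline JSON with a placeholder activity (the pipeline name and limit value are illustrative):

{
  "name": "ImportCsvPipeline",
  "properties": {
    "concurrency": 5,
    "activities": [
      {
        "name": "PlaceholderWait",
        "type": "Wait",
        "typeProperties": { "waitTimeSeconds": 1 }
      }
    ]
  }
}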
Just to add another possible cause in case someone else stumbles upon this:
Check if the data factory is down from the Resource Health tab.
I was getting Internal Server Error for all the sandbox runs.
