Forcing GitLab CI pipeline status

I have the following GitLab CI pipeline:
--- job 1 ---- job 2 ---- job 3
I implemented a workflow where, even if job 1 fails, job 2 runs, and if job 2 succeeds, job 3 also runs.
The problem is, the pipeline shows up as failed because job 1 failed (even though jobs 2 and 3 succeeded). I can't allow the failure of job 1.
Is there any way I can instruct the pipeline to take the status of a specific job, or to force its success even though one job failed?
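For reference, a minimal .gitlab-ci.yml sketch of the workflow described above (stage names, job names, and scripts are assumptions): when: always lets job 2 run after a failed job 1, and allow_failure: true on job 1 is GitLab's built-in way to keep its failure from failing the pipeline, if that is acceptable here:

stages: [one, two, three]

job1:
  stage: one
  script: ./job1.sh
  allow_failure: true   # without this, a failure here marks the whole pipeline as failed

job2:
  stage: two
  script: ./job2.sh
  when: always          # run even if job1 failed

job3:
  stage: three
  script: ./job3.sh     # default when: on_success - runs only if job2 passed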

Related

How to know whether ADF pipeline is in queue dynamically

Each pipeline instance runs every hour.
If my pipeline is stuck in the queue, I want to send an alert mail.
A pipeline run has different statuses during its lifecycle; the possible values of the run status are listed below:
• Queued
• InProgress
• Succeeded
• Failed
• Canceling
• Cancelled
To monitor the pipeline run, add the following code:
import time
from datetime import datetime, timedelta
from azure.mgmt.datafactory.models import RunFilterParameters

# Monitor the pipeline run
time.sleep(30)
pipeline_run = adf_client.pipeline_runs.get(
    rg_name, df_name, run_response.run_id)
print("\n\tPipeline run status: {}".format(pipeline_run.status))

# Query the activity runs for this pipeline run within a +/- 1 day window
filter_params = RunFilterParameters(
    last_updated_after=datetime.now() - timedelta(1),
    last_updated_before=datetime.now() + timedelta(1))
query_response = adf_client.activity_runs.query_by_pipeline_run(
    rg_name, df_name, pipeline_run.run_id, filter_params)
print_activity_run_details(query_response.value[0])
You can get the run status from the status property, as in the code above.
For more information, follow the official documentation.
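To address the queue check itself, here is a minimal sketch, assuming the same adf_client, rg_name, and df_name as above and a hypothetical send_alert_mail helper, that queries the factory for runs currently in the Queued state:

from datetime import datetime, timedelta
from azure.mgmt.datafactory.models import (
    RunFilterParameters, RunQueryFilter,
    RunQueryFilterOperand, RunQueryFilterOperator)

# Look for pipeline runs that are still queued within the last hour
filter_params = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(hours=1),
    last_updated_before=datetime.utcnow(),
    filters=[RunQueryFilter(
        operand=RunQueryFilterOperand.STATUS,
        operator=RunQueryFilterOperator.EQUALS,
        values=["Queued"])])

queued = adf_client.pipeline_runs.query_by_factory(rg_name, df_name, filter_params)
for run in queued.value:
    send_alert_mail(run.run_id)  # hypothetical alerting helper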

Azure Machine Learning - Every pipeline run is canceled automatically after 30 minutes

For 5 days now, every pipeline run has been canceled (not failed) automatically after roughly 30 minutes.
The pipeline stage shows the error message:
Response status code does not indicate success: 403 (Identity does not have permissions for Microsoft.MachineLearningServices/workspaces/experiments/runs).
Microsoft.RelInfra.Common.Exceptions.ErrorResponseException: Identity does not have permissions for Microsoft.MachineLearningServices/workspaces/experiments/runs/read actions.
I verified that my user has the rights of the Owner and Contributor roles, so every write/read access should be there.
I created a completely new Machine Learning resource and tried with different users (role Contributor).
The error message does not make sense, because the pipeline starts and runs for 30 minutes (so the permissions are evidently there) and is then canceled automatically after some time. If I re-run the pipeline step from ML Studio, the step succeeds.
I have been working with Azure Machine Learning for 4 months and everything has gone fine until now.
The pipeline is created with the local Python SDK.
The subscription is on a paid tier.
EDIT:
The 70_driver_log does not show any error message; it stops after printing some of our own messages.
70_driver_log.txt
7%|█ | 10388/139445 [12:55<2:00:20, 17.87it/s]
7%|█ | 10390/139445 [12:55<3:04:23, 11.67it/s]
executionlogs.txt:
[2021-05-25 12:00:47Z] Job is running, job runstatus is Running
[2021-05-25 12:02:49Z] Cancelling the job

How to make a partially succeeded stage fail in Azure DevOps?

I'm running newman tests on the release pipeline, but when the tests have errors it just marks the stage as partially succeeded; I want to make it fail in order to trigger the Auto-Redeploy Trigger. Is this possible?
If you are using PowerShell tasks to trigger the newman tests, you can set the task result state with the commands below.
Write-Host "##vso[task.complete result=Succeeded;]"
Write-Host "##vso[task.complete result=SucceededWithIssues;]"
Write-Host "##vso[task.complete result=Failed;]"
I want to make it fail in order to trigger the Auto-Redeploy Trigger. Is this possible?
To make the stage fail:
Yes, it's possible. When the Continue on error option is enabled for a test task, the test task and the following tasks continue to run even when the test task hits an error, and the stage containing it is marked as partially succeeded. Unchecking/disabling Continue on error therefore makes the stage fail instead of partially succeed.
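To illustrate in YAML terms (the task and script names are hypothetical), continueOnError is the YAML counterpart of the Continue on error checkbox; leaving it at its default of false makes a failing newman run fail the stage outright:

steps:
- script: newman run collection.json   # hypothetical test command
  displayName: 'Run newman tests'
  continueOnError: false   # default; a non-zero exit code now fails the stage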
To make auto-redeploy work:
For more details about auto-redeploy, you can check this thread.

Azure Data Factory automatically re-trigger failed pipeline

I want to automatically re-trigger a failed pipeline using the If Condition activity (dynamic content).
Process:
Pipeline 1 runs at a scheduled time with trigger 1 - works
If pipeline 1 fails, scheduled trigger 2 runs pipeline 2 - works
Pipeline 2 should contain an If Condition to check whether pipeline 1 failed - this is the issue
If pipeline 1 failed, then rerun it, else do nothing - need to fix this
How can this be done?
All help is appreciated.
Thank you.
I can give you an idea.
For example, your pipeline 1 fails for some reason. At that point, you can create a file in Azure Blob Storage.
(This is just an example; you can use whatever activities you want.)
Then create trigger 2 as an event trigger that fires when the blob is created.
Can't you do it with the Execute Pipeline activity?
For example:
You create a simple pipeline named "Rerun main pipeline", use Execute Pipeline inside it, and link it to "main pipeline". Then, in the main pipeline, you add a failure output and link it to "Rerun main pipeline".
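If you would rather do the failure check from code than with ADF activities, a rough alternative sketch using the same Python SDK as in the earlier answer (the pipeline name, client, and scope variables are assumptions) would be to query pipeline 1's most recent run and re-trigger it only on failure:

from datetime import datetime, timedelta
from azure.mgmt.datafactory.models import (
    RunFilterParameters, RunQueryFilter,
    RunQueryFilterOperand, RunQueryFilterOperator)

# Find runs of pipeline1 from the last hour (names are assumptions)
filter_params = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(hours=1),
    last_updated_before=datetime.utcnow(),
    filters=[RunQueryFilter(
        operand=RunQueryFilterOperand.PIPELINE_NAME,
        operator=RunQueryFilterOperator.EQUALS,
        values=["pipeline1"])])
runs = adf_client.pipeline_runs.query_by_factory(rg_name, df_name, filter_params)

# Re-run only if the latest run failed
if runs.value:
    latest = max(runs.value, key=lambda r: r.run_start)
    if latest.status == "Failed":
        adf_client.pipelines.create_run(rg_name, df_name, "pipeline1")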

Azure DevOps pipeline task to wait to run for another pipeline to complete

I had a question regarding Azure DevOps pipelines and tasks and was wondering if anyone could help.
I have a pipeline with a task that runs a PowerShell script. This script kicks off a separate pipeline, but once the script has run, the original task returns a "pass" (as expected) and the next task in the original pipeline begins to run.
Ideally, I would like the next task in pipeline 1 to wait until the pipeline kicked off by the script is complete (and passes). Does anyone know of a way this can be achieved? The steps are defined in YAML. So far I have seen conditions to wait for other steps in the same pipeline, but nothing to stop a step from running until a completely separate pipeline has completed (and passed).
Hopefully I am making sense. I can provide screenshots if that would help as well!
Instead of triggering the build with your PowerShell script, you can install the Trigger Build Task extension and use it. There you have the option "Wait till the triggered builds are finished before build continues":
In YAML:
- task: TriggerBuild@3
  displayName: 'Trigger a new build of Test'
  inputs:
    buildDefinition: Test
    waitForQueuedBuildsToFinish: true
    waitForQueuedBuildsToFinishRefreshTime: 60
If this option is enabled, the script will wait until all the queued builds are finished. Note: this can take a while depending on your builds, and your build will not continue in the meantime. If you only have one build agent, you will even end up in a deadlock situation!
According to the description, the whole process can be separated into four parts:
1. Task 1 in Pipeline1 triggers the task in Pipeline2; if the task in Pipeline2 is not editable, you might need to add a new task at the end of Pipeline2 for the next step to use.
2. The last task in Pipeline2 does something that task 2 in Pipeline1 can detect, such as creating a txt file in a specific folder.
3. Task 2 in Pipeline1 waits and listens for the txt file to be created in that folder, which means Pipeline2 has completed successfully.
4. Task 2 then runs its actual work.
The main problem you face here is that with YAML all variables are evaluated at queue time.
You could utilise an external service such as an Azure Storage account: for instance, write statuses from the second pipeline into a text file, upload it, and read the values back in your first pipeline.
At the end of script 1 in pipeline 1:
$FileExists = $false
Do {
    # Poll the storage account for the marker file (container/blob names are placeholders;
    # $storageContext is assumed to come from New-AzStorageContext)
    $File = Get-AzStorageBlob -Container "status" -Blob "done.txt" `
        -Context $storageContext -ErrorAction SilentlyContinue
    if ($File) {
        $FileExists = $true
    }
    Start-Sleep -Seconds 30   # avoid hammering the storage account
} until ($FileExists)
At the end of your other pipeline:
Do stuff
Write stuff to file
Upload file to storage account
If you just want to wait for completion, you could use the Azure DevOps CLI at the end of your first PowerShell step. This is a good link: https://techcommunity.microsoft.com/t5/ITOps-Talk-Blog/Get-Azure-Pipeline-Build-Status-with-the-Azure-CLI/ba-p/472104 . You could then run your logic on the returned status or result.
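For completeness, the same polling can be done against the Azure DevOps Builds REST API directly, which is what the CLI wraps; in this sketch the organization, project, build id, and personal access token are all placeholders:

import time
import requests

# Placeholders - fill in your own values
ORG, PROJECT, BUILD_ID, PAT = "myorg", "myproject", 1234, "<personal-access-token>"
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}?api-version=6.0"

# Poll until the triggered build completes, then inspect its result
while True:
    build = requests.get(url, auth=("", PAT)).json()
    if build["status"] == "completed":
        print("Result:", build["result"])  # succeeded, partiallySucceeded, failed, canceled
        break
    time.sleep(30)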
