How can I trigger a GitLab pipeline using GitLab event triggers?

Could someone tell me which steps to follow to have a GitLab pipeline triggered by GitLab events?

Note that it is usually the job, rather than the pipeline, that runs (or not) according to an event, for example a push filtered with only:refs or, preferably, rules: (a minimal rules: sketch follows the list below).
For the pipeline itself, check the different types of pipelines, each of which corresponds to a different event:
a scheduled (cron-like) event
a merge request event
a merged results event
an API call that triggers the pipeline
...
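As an illustration, here is a minimal .gitlab-ci.yml sketch (the job names and scripts are made up) that uses rules: with the predefined CI_PIPELINE_SOURCE variable to run a job only for one of those events:

    # Hypothetical jobs; CI_PIPELINE_SOURCE is a predefined GitLab CI variable.
    nightly-job:
      script: ./nightly-task.sh        # placeholder command
      rules:
        - if: '$CI_PIPELINE_SOURCE == "schedule"'

    mr-tests:
      script: ./run-tests.sh           # placeholder command
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

    triggered-build:
      script: ./build.sh               # placeholder command
      rules:
        - if: '$CI_PIPELINE_SOURCE == "trigger"'   # pipeline started with a trigger token
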
The OP adds in the comments:
It is resolved now: I just deleted the "toml" file, registered the runner again, and then ran the jobs.

Related

Is there a way to keep Azure Data Factory from reporting a pipeline as failed when only one activity has failed?

I have created a pipeline in Azure Data Factory that consists of multiple activities, some of which are used as fallbacks if certain activities fail. Unfortunately, the pipeline is always reported as "failed" in the monitor tab, even if the fallback activities succeed. Can pipelines be set to appear as "succeeded" in the monitoring tab even if one or more activities fail?
There are three ways to wire this up (a minimal JSON sketch follows the reference below):
Try-Catch block: the pipeline is reported as succeeded if the Upon Failure path succeeds.
Do-If-Else block: the pipeline is reported as failed, even if the Upon Failure path succeeds.
Do-If-Skip-Else block: the pipeline is reported as succeeded if the Upon Failure path succeeds.
With the Try-Catch or Do-If-Skip-Else pattern you can get a success status even if one or more activities fail.
Reference - https://learn.microsoft.com/en-us/azure/data-factory/tutorial-pipeline-failure-error-handling
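For reference, the "Upon Failure" path is simply an activity whose dependsOn entry uses the Failed dependency condition. A minimal JSON sketch (the activity names are made up and the webhook URL is a placeholder):

    {
        "name": "SendFailureAlert",
        "type": "WebActivity",
        "dependsOn": [
            { "activity": "CopyData", "dependencyConditions": [ "Failed" ] }
        ],
        "typeProperties": {
            "url": "https://my-webhook-receiver.org",
            "method": "POST"
        }
    }

The Succeeded and Skipped conditions are wired the same way, which is how the Do-If-Skip-Else variant is built.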

Azure Data Factory pipeline failure trigger executes only for the last pipeline

As the first capture shows, the alarm is not triggered when copy 6, the last pipeline, succeeds.
I want each copy pipeline to run regardless of the success or failure of the previous one.
But if one fails, I want it to trigger a webhook that sends me an alert.
Currently, even though I connect the failure path (red line) to the webhook, the webhook is not triggered for copy 1 through copy 5.
In capture 2 you can see that there is no webhook in the execution list, because copy 6 succeeded.

Azure Batch service: add a webhook on job completion

I've set up a Batch service for media file encoding with ffmpeg. Each job can contain multiple tasks, and each task encodes one file. I use the task-specific resource file and output file mechanism, so the Batch service automatically fetches the input files from blob storage and delivers the results back to it.
However: how do I know that a job or task has completed or failed?
Since the job can take very long - even more so on low priority nodes - I need some sort of webhook or event. Continuous polling on the job status is not viable.
The options I could think of:
After running the ffmpeg command, chain a curl command. Something like:
"commandLine" : "/bin/bash -c "ffmpeg -i inputFile outputFile &&
curl https://my-webhook-receiver.org ""
Technically it works, but I'm worried about timing. The curl request is probably(?) completed before the Batch service pushes the result file back to blob storage. If it's a big file and the upload takes, say, half a minute, I will get notified before the file exists in the output container.
Use the blob storage event system.
This has the advantage that the result file obviously must have arrived. However, what if the job failed? The event will never be triggered in that case...
Batch alert system. You can apparently create alerts for certain Batch events (e.g. task completion) and hook them up to an action group and finally a webhook. Is that the right call? It feels kind of hacky and not the intended way to use this system.
Isn't there a way to connect Azure Batch with, for example, Azure Event Grid directly?
What is the "correct" way to let my server know, the encoded file is ready?
There are a few ways to handle this, although admittedly some of these solutions are not very elegant:
Create a task dependency on each task. The dependent task is the one that invokes the webhook. You can make it such that the dependent task is invoked even if the task it depends on fails with certain exit codes (see the sketch after this list). You can also create a "merge task" that depends on all tasks in the job, which can let you know when everything completes.
Use a job manager task instead. Job managers are typically used to monitor progression of a workflow and spawn other tasks, so you would be able to query status of task completion (success or failure) and send your webhook commands via this task or a task spawned by the job manager.
Use job release mechanisms to run actions when a job completes. This does not solve your per-task notification problem, but can be used as a job completion signal.
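A minimal C# sketch of the first option using the Microsoft.Azure.Batch client library (the job ID, pool ID, task IDs, and file names are placeholders, and the webhook URL is the one from the question). The ExitConditions on the encode task mark the dependency as satisfied even on a non-zero exit code, so the notifier task runs for failures as well as successes:

    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Common;

    // Assumes an existing, authenticated BatchClient and an existing pool.
    CloudJob job = batchClient.JobOperations.CreateJob(
        "encode-job", new PoolInformation { PoolId = "encode-pool" });
    job.UsesTaskDependencies = true;   // required for DependsOn to be honored
    job.Commit();

    CloudTask encode = new CloudTask(
        "encode-1", "/bin/bash -c \"ffmpeg -i inputFile outputFile\"")
    {
        // Satisfy the dependency even when ffmpeg exits non-zero,
        // so the notifier also fires for failed encodes.
        ExitConditions = new ExitConditions
        {
            Default = new ExitOptions { DependencyAction = DependencyAction.Satisfy }
        }
    };

    CloudTask notify = new CloudTask(
        "notify-1", "/bin/bash -c \"curl -s https://my-webhook-receiver.org\"")
    {
        DependsOn = TaskDependencies.OnId("encode-1")
    };

    batchClient.JobOperations.AddTask("encode-job", new[] { encode, notify });

Because the encode task's output files are persisted before the task is marked complete, the dependent notifier should only start after the upload has finished, which also addresses the timing concern from the first option in the question.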

Can I execute an on-demand WebJob from a scheduled WebJob?

I need to execute a long-running WebJob on certain schedules or on demand, with some parameters that need to be passed. I had it set up so that the scheduled WebJob would put a message with the parameters on a queue and a queue-message-triggered job would take over, OR some user interaction would put the same message with the parameters on the queue and the triggered job would take over. However, for some reason the triggered function never finishes, and right now I cannot see any exceptions being displayed in the dashboard outputs (see Time limit on Azure Webjobs triggered by Queue).
I'm looking into whether I can execute my triggered WebJob as an on-demand WebJob and pass the parameters to it. Is there any way to call an on-demand WebJob from a scheduled WebJob and pass it some command-line parameters?
Thanks for your help!
QueueTriggered WebJob functions work very well when configured properly. Please see my answer on the other question which points to documentation resources on how to set your WebJobs SDK Continuous host up properly.
Queue messaging is the correct pattern for you to be using for this scenario. It allows you to pass arbitrary data along to your job, and will also allow you to scale out to multiple instances as needed when your load increases.
You can use the WebJobs Dashboard to invoke your job function directly (the "Run Function" button) - you can specify the queue message input directly in the Dashboard as a string. This allows you to invoke the function on demand with whatever inputs you want, in addition to letting the function continue to respond to queue messages actually added to the queue.
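For illustration, here is a minimal WebJobs SDK sketch of such a queue-triggered function (the queue name "work-items" and the function name are made up). The same function handles messages enqueued by the scheduled job, by user interaction, or typed into the Dashboard's Run Function form:

    using System.IO;
    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        // Runs whenever a message lands on the (hypothetical) "work-items" queue.
        public static void ProcessWorkItem(
            [QueueTrigger("work-items")] string parameters, TextWriter log)
        {
            log.WriteLine("Starting long-running work with parameters: " + parameters);
            // ... long-running work goes here ...
        }
    }

    public class Program
    {
        public static void Main()
        {
            // Continuous WebJob host that listens for queue messages.
            var host = new JobHost();
            host.RunAndBlock();
        }
    }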

Implementing multi-threading in workflows

I'm aware that a single workflow instance runs in a single thread at a time. I have a workflow with two receive activities inside a pick activity. Message correlation is implemented to make sure requests to both activities are routed to the same instance.
In the first receive branch I have a parallel activity with a delay activity in one branch. The parallel activity completes either when the delay expires or when a flag is set to true.
While the parallel activity is waiting for that condition to be met, how can I receive calls on the second receive activity? The flag will only be set to true through that branch. I'm waiting for your suggestions or ideas.
Check out my blog post, The Workflow Parallel Activity and Task Parallelism; it will help you understand how WF works.
Not quite sure what you are trying to achieve here.
If you have a Pick with 2 branches and both branches contain a Receive, the workflow will continue after it receives either of the 2 messages the 2 Receive activities are waiting for. The other branch will be canceled and will not receive anything. The fact that one Receive is in a Parallel makes no difference here. So unless this is in a loop, you will not receive more than one WCF message in your workflow.
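To make those Pick semantics concrete, here is a rough structural C# sketch (operation names and actions are made up; contract, correlation, and hosting in a WorkflowServiceHost are omitted). Whichever Receive trigger gets a message first, that branch's Action runs and the other branch, including its pending Receive, is cancelled:

    using System.Activities;
    using System.Activities.Statements;
    using System.ServiceModel.Activities;

    // Structural sketch only: the first trigger to receive a message wins;
    // the other PickBranch (and its pending Receive) is cancelled.
    Activity body = new Pick
    {
        Branches =
        {
            new PickBranch
            {
                Trigger = new Receive
                {
                    OperationName = "StartWork",   // hypothetical operation
                    CanCreateInstance = true
                },
                Action = new WriteLine { Text = "First branch won" }
            },
            new PickBranch
            {
                Trigger = new Receive
                {
                    OperationName = "SetFlag"      // hypothetical operation
                },
                Action = new WriteLine { Text = "Second branch won" }
            }
        }
    };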
