Bitbucket Pipeline: call an external service and receive results - bitbucket-pipelines

I want a Bitbucket Pipeline to call an external custom service and update the pipeline's status depending on the response. A call takes tens of minutes, so "sleep" tricks make no sense, in my opinion.
I tried the Bitbucket REST API for generating a "FAILED" report, but it doesn't invalidate an already-green pipeline, which is considered complete after the service call succeeds. I expect the usual pipeline behavior on both success and error.
Is there any kind of solution to this problem?
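One common workaround is to make the pipeline step itself block until the external service reports a terminal state, so the step's exit code drives the pipeline's status in the normal way. Below is a minimal Python sketch of such a polling step; the endpoint, the JSON shape (`{"status": "running" | "succeeded" | "failed"}`), and the timeout are all assumptions, not part of any real service.

```python
import json
import time
import urllib.request

def exit_code_for(state):
    """Map the service's reported state to a step exit code (None = keep polling).

    The state names are hypothetical; adapt them to whatever your service returns.
    """
    return {"succeeded": 0, "failed": 1}.get(state)

def poll(url, interval=60, timeout=3600):
    """Poll the service until it reports a terminal state or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(url) as resp:
            code = exit_code_for(json.load(resp).get("status"))
        if code is not None:
            return code  # passed to sys.exit(); non-zero turns the step red
        time.sleep(interval)
    return 1  # treat a timeout as a failure
```

A pipeline step would run this script and exit with the returned code, so both success and failure surface as ordinary pipeline results. Note that Bitbucket steps have a maximum runtime, so this only works if the whole wait fits inside that limit.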

Related

Is there any way to get the current state of the job within the steps in GitHub Actions?

I have been working with GitHub Actions, and I need to send a message about the reason for a failure to an external webhook. I tried to use Octokit REST JS in a Node.js action, but since the job isn't completed, I am unable to fetch the reason for the current job's failure. Is there any way I can achieve this?
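A job cannot see its own conclusion while it is still running, but a step gated with `if: failure()` (or a separate workflow triggered on `workflow_run`) can query the jobs of the run via the GitHub REST API (`GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs`) and report any failed steps. A rough Python sketch, where the owner, repo, run id, and token are placeholders you would supply from the workflow context:

```python
import json
import urllib.request

def failed_steps(jobs_payload):
    """Extract (job name, step name) pairs whose conclusion is 'failure'."""
    failures = []
    for job in jobs_payload.get("jobs", []):
        for step in job.get("steps", []):
            if step.get("conclusion") == "failure":
                failures.append((job["name"], step["name"]))
    return failures

def fetch_jobs(owner, repo, run_id, token):
    """Call the 'List jobs for a workflow run' REST endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/actions/runs/{run_id}/jobs"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The result of `failed_steps(fetch_jobs(...))` could then be posted to the external webhook.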

Is there any way to know and get only the log for the stage that had an error on Azure DevOps pipelines using the REST API?

I see there's a REST endpoint to get a log by its ID ( https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get%20build%20log?view=azure-devops-rest-5.1 ).
And from what I see, there's a log for each step, and then the last log contains all steps. Is this always how the response will be? Is it documented somewhere that the last log will always be the full log?
And is there a way to know which log to get for the stage that failed the build? I would need only the one that caused the build to fail, not all of them.
No, there is no way to achieve what you need in Azure DevOps Services at present.
And from what I see, there's a log for each step, and then the last log contains all steps. Is this always how the response will be?
There are also parameters called startLine and endLine; by specifying them, you can fetch part of the entire build log. But this is not useful in your scenario.
I'm afraid you have to either download the full logs or get the task log of a release.
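For reference, calling the Get Build Log endpoint mentioned above (with the optional startLine/endLine parameters) looks roughly like this in Python; the organization, project, build id, log id, and PAT are placeholders you must supply:

```python
import base64
import urllib.request

def build_log_url(org, project, build_id, log_id, start_line=None, end_line=None):
    """Assemble the Get Build Log URL (api-version 5.1) with an optional line range."""
    url = (f"https://dev.azure.com/{org}/{project}/_apis/build/builds/"
           f"{build_id}/logs/{log_id}?api-version=5.1")
    if start_line is not None:
        url += f"&startLine={start_line}"
    if end_line is not None:
        url += f"&endLine={end_line}"
    return url

def fetch_log(url, pat):
    """Download a build log, authenticating with a personal access token."""
    auth = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {auth}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

As the answer notes, this still gives you a log by its ID, not a way to discover which ID belongs to the failed stage.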

Confused if WebJob is failing or not

I just created a new WebJob that runs every two hours, and I'm confused about whether it's succeeding or failing.
It makes an API call to a third-party app and processes some data internally based on the results. Currently, there is no data available on the third-party API, so I can't tell if it's truly failing or not.
The bird's-eye view in the portal shows the job as failing.
If I click the particular run that seems to have failed, I get a screen that's even more confusing: at the top it says Failed, but right next to the actual run it shows Success. Which is the case here?

How to get a response from an Azure Runbook started with a webhook?

As I know (and Microsoft always points out), Runbooks help you automate processes. According to the Azure webhook documentation, it's possible to call (start/run) a Runbook from an external application using an HTTP POST request, and there are some simple response codes to determine the status of the POST request, but there seems to be no way to get any further response from the process.
Am I looking in the wrong place and using the wrong tool for automation in Azure Cloud, or is there some way to send a request to a Runbook and get a response back?
Extra note: I know it's possible to call a Runbook from another Runbook using a workflow and get responses, but the problem is: if I start a Runbook using a webhook, and there is no way to get any response beyond those simple status codes, then how can I determine the result of my call in order to drive further automation? There should be some way to get the final result of a Runbook run to make a decision for the next step; otherwise, Runbooks would be meaningless for automation!
Azure Automation is built as a fire-and-forget solution. It was the first piece in the event-driven architecture: something occurs on one system, and a call is made to react to it.
The intention is that the runbook itself has all of the logic needed to act on its own behalf, and that any further processing is done by that runbook firing another process, which can then inspect the output and make decisions based on it.
It does seem counter-intuitive initially - I have previously jumped through all sorts of hoops to make Automation more informative - but once you realise its intended purpose in the Azure infrastructure, it begins to make sense.
If you are specifically looking for something you can fire and get a response from, Azure Functions would be the way to go.
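To illustrate the "simple response" the question mentions: starting a Runbook via its webhook URL returns an HTTP 202 with a body containing only a JobIds array, and that job id is all you get back synchronously. A minimal Python sketch of the call (the webhook URL and payload are placeholders):

```python
import json
import urllib.request

def job_ids_from_response(body):
    """Pull the JobIds list out of the webhook's 202 response body."""
    return json.loads(body).get("JobIds", [])

def start_runbook(webhook_url, payload):
    """POST to an Azure Automation webhook; returns only the started job ids."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return job_ids_from_response(resp.read())
```

Anything beyond the job id (output, success/failure) has to be fetched separately, e.g. by polling the Automation job with that id, which is why the answer above points to Azure Functions for true request/response scenarios.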

Deploying a test web app for each GitHub pull request

Is it possible for GitHub to trigger a new test deployment when a pull request is submitted? I'd like it to create a new folder on the server (Azure preferred) so that a test URL (e.g. http://testserver.com/PR602/) is generated that we can refer to in the pull request.
This would allow anyone to test a pull request without having to clone the repo, check out the branch, and build it locally.
In my initial research I found that Travis CI can deploy all branches, but I'm not clear on how this would be triggered. Do I have to write a custom app that's triggered by pull request webhooks? I'm hoping someone has discovered a simpler method.
Do I have to write a custom app that's triggered by pull request webhooks?
Yes, or find someone who happens to have written the exact webhook handler you need.
Writing a webhook handler isn't much work. If you don't want to integrate it with your current app, you can use a micro-framework like Flask to do this in only a few lines of code.
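A minimal sketch of the kind of Flask handler described above, listening for GitHub `pull_request` webhook events. The `deploy()` helper is hypothetical; it stands in for whatever actually creates the per-PR folder and deployment:

```python
from flask import Flask, request

app = Flask(__name__)

def deploy(pr_number, branch):
    """Placeholder: trigger a test deployment for this pull request,
    e.g. build the branch and publish it under /PR<number>/."""
    print(f"deploying {branch} to /PR{pr_number}/")

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    event = request.headers.get("X-GitHub-Event", "")
    payload = request.get_json(silent=True) or {}
    # React when a PR is opened or gets new commits pushed
    if event == "pull_request" and payload.get("action") in ("opened", "synchronize"):
        pr = payload["pull_request"]
        deploy(pr["number"], pr["head"]["ref"])
        return "deploy triggered", 202
    return "ignored", 200

# run with: flask --app webhook run
```

A production handler would also verify the `X-Hub-Signature-256` header against the webhook secret before acting on the payload.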
Coming back to this in 2022, there is now also the option of GitHub Actions, which is a first-party CI service. Actions provides a framework for defining what to do when certain triggers fire, and there's an extensive marketplace of drop-in components, so you may be able to trigger other systems without writing any custom code or running a webserver to listen for webhooks.