As far as I know (and as Microsoft keeps pointing out), Runbooks are meant to help you automate processes. According to the Azure webhook documentation, it's possible to call (start/run) a Runbook from an external application using an HTTP POST request, and there are some simple response codes to determine the status of the POST request. But there seems to be no way to get any further response about the progress.
Am I looking in the wrong place and using the wrong tool for automation in Azure Cloud, or is there some way to send a request to a Runbook and get a response back?
Extra note: I know it's possible to call a Runbook from another Runbook using a workflow and get responses back, but the problem is: if I start a Runbook using a webhook, and there is no way to get any response beyond those simple status codes, then how can I determine the result of my first call in order to do some automation? There should be some way to get the final result of a Runbook run to make a decision about the next step; otherwise, Runbooks would be meaningless for automation!
Azure Automation is built as a fire-and-forget solution. It was the first piece in the event-driven architecture: something occurs on one system, and a call is made to react to it.
The intention is that the runbook itself has all of the logic needed to act on its own behalf, and that any further processing is done by the runbook firing another process, which can then go and inspect the output and make decisions based on it.
It does seem counter-intuitive at first - I have previously jumped through all sorts of hoops to make Automation more informative - but once you realise its intended purpose in the Azure infrastructure, it begins to make sense.
If you are specifically looking for something you can fire and get a response from, Azure Functions would be the way to go.
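For illustration, an HTTP-triggered function can do the work and hand the result straight back in the same request/response cycle. A minimal sketch using the Node.js programming model (the function body and names are placeholders, not a prescribed implementation):

```typescript
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// HTTP-triggered function: performs the work and returns the outcome
// directly, unlike a fire-and-forget runbook webhook.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    const name = req.query.name || (req.body && req.body.name) || "world";

    // ... perform the actual automation work here ...

    context.res = {
        status: 200,
        body: { result: `processed ${name}`, succeeded: true }
    };
};

export default httpTrigger;
```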
I have a custom handler written in Go running as an Azure Function. It has an endpoint with two methods:
POST /entities
PUT /entities
It was easy to make my application run as an Azure function: I added "enableForwardingHttpRequest": true to host.json, and it just works.
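For reference, that setting lives under the customHandler section of host.json; roughly like this (the executable name is a placeholder):

```json
{
  "version": "2.0",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "myapp"
    },
    "enableForwardingHttpRequest": true
  }
}
```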
What I need to achieve: life happened and now I need to enqueue a message when my entities change, so that it triggers another function that uses a queueTrigger to perform some async work.
What I tried: the only way I found so far was to disable enableForwardingHttpRequest and change all my endpoints to accept the Azure Functions host's raw JSON input and output, and then output a message in one of the fields (as documented here).
It sounds like a huge change to perform something simple... Is there a way I can enqueue a message without having to change the whole way my application handles requests?
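For context, the documented approach means the handler stops returning a plain HTTP body and instead returns the host's JSON envelope, with the queue message placed in an output binding. A sketch of such a response, assuming function.json declares a queue output binding named "outMessage" and an HTTP output named "res" (both names hypothetical):

```json
{
  "Outputs": {
    "res": {
      "statusCode": 201,
      "body": "entity updated"
    },
    "outMessage": "entity-changed:42"
  },
  "Logs": ["enqueued change notification"],
  "ReturnValue": null
}
```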
As per this GitHub document, as of now custom handlers for Go in Azure Functions have a bug which needs to be fixed.
I'm trying to use Azure Durable Functions to implement a minimal server that accepts data via a POST and returns the same data via a GET - but I can't seem to generate the GET response.
Is it simply not possible to return the response via a simple GET response? What I do NOT want to happen is:
GET issued
GET response returns with second URL
Client has to use second URL to get result.
I just want:
GET issued
GET response with requested data.
Possible?
I'm not really sure whether durable functions are meant to be used as a 'server'. Durable functions are part of 'serverless' and therefore I'm not sure whether it's possible (in a clean way).
To my knowledge, durable functions are used to orchestrate long-lasting processes, for example handling the orchestration of a batch job. Using durable functions it's possible to create an 'async HTTP API' to check on the status of processing a batch of items (see the durable functions documentation). I've written a blog post about durable functions; feel free to read it (https://www.luminis.eu/blog/azure-functions-for-your-long-lasting-logic/).
But as for your use case: I think you can create two separate Azure functions. One for posting your data, where you can use an Azure Blob Storage output binding. Your second function can have a GET HTTP trigger and, depending on your data, a blob input binding. No need to use durable functions for it :)!
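A minimal sketch of that pair, assuming each function's function.json declares the HTTP trigger plus a blob output binding named "outBlob" (first function) or a blob input binding named "inBlob" (second function); all names are placeholders:

```typescript
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// POST function: persist the request body through the blob output binding.
export const saveData: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.bindings.outBlob = req.body;
    context.res = { status: 201, body: "stored" };
};

// GET function: return whatever the blob input binding loaded.
export const getData: AzureFunction = async function (context: Context): Promise<void> {
    context.res = { status: 200, body: context.bindings.inBlob };
};
```

(In a real function app each function would live in its own folder with its own default export; they are shown together here for brevity.)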
Not really a direct answer to your question, but hopefully a solution to your problem!
I've created a program that handles PubSub messaging using the Google PubSub NodeJS SDK.
While developing this I noticed that the NodeJS Library and docs show two ways of retrieving active subscriptions in Google PubSub:
PubSub.subscriptions('SubscriptionName') docs
PubSub.topic('TopicName').getSubscriptions() docs
I understand that the 2nd option might only list subscriptions related to a topic, but I'm more interested in the workings behind the scene.
In my first attempt I used the 2nd option to retrieve my subscriptions, and that worked while running the application, but I ran into timeouts when trying to mock the call in my unit tests and I couldn't fix it. I switched to the 1st approach, which doesn't use a Promise and just returns a plain Subscription object; this worked just fine in my unit tests.
Are there downsides to not using the promise-based call, in that it might not yield the most up-to-date results? If not, is there a reason why there are two options, and why one is promise-based and the other is not?
These two APIs do very different things. The first one creates a Subscription object in the client that can be used for receiving messages. It does not retrieve any state from the Cloud Pub/Sub service. Therefore, there is no asynchronous work to do and it need not return a promise. The second one actually goes to the Cloud Pub/Sub service, retrieves the list of subscriptions for the topic, and creates a Subscription object for each one.
If you know the name of the subscription for which you want to receive messages and can be reasonably confident that it exists, then use PubSub.subscriptions('SubscriptionName'). If you try to start receiving messages on this subscription by calling subscription.on('message', messageHandler); and it doesn't exist, then an error is emitted.
If you don't know the name of the subscription and instead need to fetch the list of all subscriptions for the topic and choose the one to receive messages from, then use the PubSub.topic('TopicName').getSubscriptions() call.
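A short sketch of both calls (note that in the Node.js library the local, non-promise call is subscription() on a client instance):

```typescript
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// Local construction: returns a Subscription object immediately; no RPC is made.
const sub = pubsub.subscription("SubscriptionName");
sub.on("message", (message) => {
    console.log(message.data.toString());
    message.ack();
});
// If "SubscriptionName" does not exist, an "error" event is emitted here.
sub.on("error", (err) => console.error(err));

async function listSubscriptions(): Promise<void> {
    // Service call: a round trip to Cloud Pub/Sub, hence the promise.
    const [subscriptions] = await pubsub.topic("TopicName").getSubscriptions();
    subscriptions.forEach((s) => console.log(s.name));
}
```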
For further help with why mocking the getSubscriptions() call didn't work, we would probably need to see the code you were using to mock it.
Is it possible to use dynamic content in the POST body for a scheduled job in Azure scheduler?
I am writing a Logic App that I would like to pass a start time and a look-back minute count to, so that a failed invocation can be re-run across the same underlying data by passing the same parameters. I'm wondering whether Scheduler has functions or operations similar to what can be found in Logic Apps for values such as utcNow().
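For comparison, these are the kinds of Logic Apps workflow expressions the question has in mind; a hypothetical request body using the built-in utcNow() and addMinutes() functions:

```json
{
  "windowStart": "@{addMinutes(utcNow(), -15)}",
  "invokedAt": "@{utcNow()}",
  "lookBackMinutes": 15
}
```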
We do not support dynamic content in Scheduler; you may find a timestamp in the request headers of the calls Scheduler makes, though.
Why are you not using Logic Apps when it can perform what you need?
I am working on code for a webserver.
I am trying to use webhooks to do the following tasks, after each push to the repository:
update the code on the webserver.
restart the server to make my changes take effect.
I know how to make the revision control run the webhook.
Regardless of the specifics of which revision control etc. I am using, I would like to know the standard way to create a listener for the webhook's POST call on Linux.
I am not completely clueless - I know how to make an HTTP server in Python and I can make it run the appropriate bash commands, but that seems so cumbersome. Is there a more straightforward way?
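For reference, the kind of listener described here can stay very small; a minimal sketch in Node.js/TypeScript (the repo path, endpoint, and service name are hypothetical):

```typescript
import { createServer } from "http";
import { execFile } from "child_process";

// Tiny deploy listener: on a POST to /deploy, pull the latest code and
// restart the service.
createServer((req, res) => {
    if (req.method === "POST" && req.url === "/deploy") {
        execFile("/bin/sh", ["-c", "cd /srv/app && git pull && sudo systemctl restart myapp"],
            (err, _stdout, stderr) => {
                res.statusCode = err ? 500 : 200;
                res.end(err ? stderr : "deployed\n");
            });
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(8080);
```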
Set up a script to receive the POST request (a PHP script would be enough).
Save the request into a database and mark it as "not yet finished".
Run a cron job that checks the database for "not yet finished" tasks and does whatever you want with the information you saved; a sketch of the receiving half follows below.
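Here is that receiving half sketched out, using a spool directory in place of the database (the path is hypothetical); a cron job would then scan the directory for *.pending files:

```typescript
import { createServer } from "http";
import { writeFile } from "fs/promises";

// Accept the webhook POST and persist it for later processing;
// nothing is executed in the request path.
createServer((req, res) => {
    if (req.method !== "POST") {
        res.statusCode = 405;
        res.end();
        return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
        await writeFile(`/var/spool/webhooks/${Date.now()}.pending`, body);
        res.statusCode = 202; // accepted, "not yet finished"
        res.end("queued\n");
    });
}).listen(8080);
```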
This is definitely not the best solution, but it works.
You could use IronWorker, http://www.iron.io, to ssh in and perform your tasks on every commit. And to kick off the IronWorker task you can use its webhook support. Here's a blog post that shows you how to use IronWorker's webhooks functionality, and the post already has half of what you want (it starts a task based on a GitHub commit): http://blog.iron.io/2012/04/one-webhook-to-rule-them-all-one-url.html