TL;DR:
Is it possible to have multiple inputs to an Azure Function?
Longer Explanation:
I'm new to Azure Functions and still don't have a good understanding of it.
I have an application which downloads HTML data through a proxy web request and I was considering moving it to Azure Functions.
However, the function would require two inputs: a string URL and a proxy object (which contains IP address, username and password properties).
I was thinking of having two queues, one for URLs and one for proxies.
URLs would be added to the queue by a client application, which would trigger the function.
The proxy queue would have a limited pool of proxy objects which would be added back into the queue by the consuming function after they had been used for the web request.
So, if there are no proxies in the proxy queue, the function will not be able to create a web request until one is added back into the queue.
This is all assuming that Azure Functions run in parallel and that every trigger from the URL queue runs the function on another thread.
So, is what I'm considering possible? If not, is there an alternative way that I could go about it?
There can be only one trigger for a given function, i.e. the function will run when there is a new message in one specified queue.
There is an input bindings feature, which can load additional data based on properties of the triggering request. E.g. if the incoming queue message contains a URL and some proxy ID, and proxy settings are stored as Table Storage entities (or blobs), you could define an input binding to automatically load the proxy settings based on the ID from the message. See this example.
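For illustration, here's a minimal sketch of that setup, assuming the in-process C# model; the queue name, table name and entity shape ("url-queue", "Proxies", ProxyEntity) are made up for the example. The queue message carries the URL plus a ProxyId, and the Table binding resolves {ProxyId} from the message to load the matching proxy entity:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;
    using Microsoft.WindowsAzure.Storage.Table;

    // Queue message: { "Url": "...", "ProxyId": "..." }
    public class UrlMessage
    {
        public string Url { get; set; }
        public string ProxyId { get; set; }
    }

    // Proxy settings in Table Storage: PartitionKey "proxy", RowKey = ProxyId.
    public class ProxyEntity : TableEntity
    {
        public string IpAddress { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
    }

    public static class DownloadHtml
    {
        [FunctionName("DownloadHtml")]
        public static void Run(
            [QueueTrigger("url-queue")] UrlMessage message,             // the one and only trigger
            [Table("Proxies", "proxy", "{ProxyId}")] ProxyEntity proxy, // input binding keyed off the message
            ILogger log)
        {
            log.LogInformation($"Fetching {message.Url} via proxy {proxy.IpAddress}");
            // ...create the proxied web request here...
        }
    }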
Of course, you could achieve the same without input binding, just by loading the proxy settings manually in function body based on your custom logic.
There is no way to set up a Function not to be triggered until you have messages in two queues at the same time.
I have created an Application Insights resource in Azure and have it up and running.
I need to filter the data that is sent there so that only data coming from a specific domain is recorded.
So, the application might be running in several places, like test and prod but I only want the logs from some of these domains.
What's the best implementation using Azure to filter all the possible requests from other domains that could "try" to send info there?
Once it has been sent to AppInsights, you cannot remove it anymore. You can only filter it out in queries so that it is not shown, but it will be stored in the underlying datastore until the TTL expires.
If you do not want the data there, you need to filter it out on the sending side - or just not send it in the first place. So why don't you simply remove the instrumentation key from your non-prod environments?!
The better solution, however, would be to have separate AppInsights instances for each env.
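If you do keep a single instance and filter on the sending side, the classic Application Insights .NET SDK lets you drop telemetry before it leaves the process with an ITelemetryProcessor. A minimal sketch, where the allowed host name is a placeholder:

    using Microsoft.ApplicationInsights.Channel;
    using Microsoft.ApplicationInsights.DataContracts;
    using Microsoft.ApplicationInsights.Extensibility;

    // Drops request telemetry from any host other than the allowed domain.
    public class DomainTelemetryFilter : ITelemetryProcessor
    {
        private readonly ITelemetryProcessor _next;

        public DomainTelemetryFilter(ITelemetryProcessor next)
        {
            _next = next;
        }

        public void Process(ITelemetry item)
        {
            if (item is RequestTelemetry request &&
                request.Url != null &&
                request.Url.Host != "www.my-prod-domain.com")   // placeholder domain
            {
                return; // swallow it: never sent, never stored
            }
            _next.Process(item);
        }
    }

    // Registration, e.g. at startup:
    // TelemetryConfiguration.Active.TelemetryProcessorChainBuilder
    //     .Use(next => new DomainTelemetryFilter(next))
    //     .Build();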
I need to make 6 HTTP requests using a trigger in a single Logic App resource. How can I make multiple HTTP requests from a Logic App? Also, the Logic App shows the error "This session has timed out. To see the latest run status, navigate to the runs history blade."
All the HTTP requests should be independent of each other.
If any documentation is available for this, please share.
Every logic app workflow starts with a trigger, which fires when a specific event happens, or when new available data meets specific criteria.
Each time that the trigger fires, the Logic Apps engine creates a logic app instance that runs the actions in the workflow.
For the HTTP trigger, follow the doc Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps.
YES. This is a very basic pattern.
How? Well, you just do. If it's exactly 6 calls, you can use 6 HTTP Connectors. If it's variable, you have to loop over whatever incoming set you have and put the HTTP Connector in the loop.
All outbound requests are independent of each other.
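As a rough sketch of what that looks like in the workflow's code view (the URIs are placeholders): every action whose runAfter is empty starts right after the trigger, so all six HTTP calls fire independently and in parallel.

    "actions": {
        "HTTP_1": {
            "type": "Http",
            "inputs": { "method": "GET", "uri": "https://example.com/api/one" },
            "runAfter": {}
        },
        "HTTP_2": {
            "type": "Http",
            "inputs": { "method": "GET", "uri": "https://example.com/api/two" },
            "runAfter": {}
        }
    }

HTTP_3 through HTTP_6 would follow the same shape.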
We are evaluating Azure Service Bus for use between the web server and app server in a request/response pattern. We are planning to have two queues:
Request Queue
Response Queue
The web server will push a message to the request queue and subscribe to the response queue.
By comparing the MessageID and CorrelationId, it can match the response, which can then be sent back to the browser.
But in the cloud, with elastic scaling, we can increase/decrease the number of web server (and app server) instances.
We are wondering if this pattern will work optimally here.
To make this work, we will have to have one Request queue and multiple topics (one for each web server instance).
This will have two downsides:
1. Along with increasing/decreasing web server instances, we will have to create/delete topics as well.
2. All the messages will be pushed to all the topics, so every message will be processed by all the web servers, which is not efficient.
Please share your thoughts.
Thanks in advance.
When you scale out your endpoint, you don't want to have instance affinity. You want to rely on competing consumers and not care which instance of your endpoint processes messages.
For example, if you receive a response and write it to a database, most likely you don't care which instance of the endpoint has written the data. But if you have some in-memory state, or any other information only available to the endpoint instance that originated the request, and processing reply messages requires that information, then you have instance affinity and need to either remove it or use technology that allows you to address it. For example, something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
Note that ideally you should avoid instance affinity as much as you can.
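To make the correlation concrete, here's a hedged sketch using the current Azure.Messaging.ServiceBus SDK; the queue names and payloads are made up. The web tier stamps a MessageId and ReplyTo on the request, and whichever app-server instance handles it copies that id into CorrelationId on the reply:

    using System;
    using Azure.Messaging.ServiceBus;

    var client = new ServiceBusClient("<connection-string>");

    // Web server: send a request stamped with an id and a reply address.
    var requestId = Guid.NewGuid().ToString();
    await client.CreateSender("request-queue").SendMessageAsync(
        new ServiceBusMessage("do-some-work")
        {
            MessageId = requestId,
            ReplyTo = "response-queue"
        });

    // App server: any competing instance may pick the request up...
    var receiver = client.CreateReceiver("request-queue");
    ServiceBusReceivedMessage request = await receiver.ReceiveMessageAsync();

    // ...and it replies with the request's MessageId as the CorrelationId,
    // so the originating side can match response to request.
    await client.CreateSender(request.ReplyTo).SendMessageAsync(
        new ServiceBusMessage("work-result") { CorrelationId = request.MessageId });
    await receiver.CompleteMessageAsync(request);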
I know this is old, but thought I should comment to complete this thread.
I agree with Sean.
In principle, do not design with instance affinity in mind.
Any design should work irrespective of number of instances and whichever instance runs the code.
Microsoft does recommend the same when designing application architecture for running in the cloud.
In your case, I do not think you should plan to have one topic for each instance.
You should just put the request messages into one topic, with a subscription to allow your receiving app service to process those request messages.
When your receiving app service scales out, that's where your design needs to allow reading messages from the subscription from multiple receivers (multiple instances), which is described in the Competing consumers pattern.
https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
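A small sketch of that consuming side, again with Azure.Messaging.ServiceBus and placeholder names: every scaled-out instance runs this identical processor against the same queue (or topic subscription), and the broker delivers each message to exactly one of them.

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    var client = new ServiceBusClient("<connection-string>");
    ServiceBusProcessor processor = client.CreateProcessor(
        "request-queue",
        new ServiceBusProcessorOptions { MaxConcurrentCalls = 4 });

    processor.ProcessMessageAsync += async args =>
    {
        // Only this instance receives this particular message.
        Console.WriteLine($"Handling {args.Message.MessageId}");
        await args.CompleteMessageAsync(args.Message);
    };

    processor.ProcessErrorAsync += args =>
    {
        Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    };

    await processor.StartProcessingAsync();
    Console.ReadLine(); // keep the host alive while messages are pumped
    await processor.StopProcessingAsync();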
Please post what you have finally implemented.
I have an ASP.NET MVC 5 website running in an Azure App Service. My site allows customers to communicate via email, uploading documents if required.
I modeled this as sending email with attachments (max 4 MB) using SendGrid with an Azure WebJob.
I cannot use an Azure queue since the message size is way too small.
Therefore I have to communicate with a triggered WebJob via Kudu. I've read the docs, and the argument seems to be a simple string, which I can read either from the arguments or from the WEBJOBS_COMMAND_ARGUMENTS environment variable.
My POCO class to send email has customer properties (mostly strings), plus the file the user uploads, which is of type HttpPostedFileBase.
How do I pass this POCO class to the triggered WebJob via Kudu?
Should I JSON-serialize it and pass it as a string?
Any other options?
I need help.
"I cannot use an Azure queue since the message size is way too small."
That's not a limitation on Azure Queues.
"Should I JSON-serialize it and pass it as a string?"
That's basically what Azure queues do under the hood.
My suggestion is if you want to use Azure WebJobs to send an email, you can just send a message to a queue with the appropriate payload. When you listen to the queue you can get the attachment (maybe store it as a blob?) and send it using SendGrid.
When you listen to the message you can get either a string or a POCO object. If you're going to use a POCO, I suggest not including the HttpPostedFileBase property, since that can make the message too large (message size is a real limitation there).
Hope this helps
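Here's a rough sketch of that suggestion; the queue name, blob container, POCO shape and SendGrid key are all placeholders. The site uploads the file to blob storage and enqueues a small JSON message; a WebJob binds the message to the POCO and streams the blob into the SendGrid mail:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;
    using SendGrid;
    using SendGrid.Helpers.Mail;

    // Small, string-only payload; the attachment itself stays in blob storage.
    public class EmailRequest
    {
        public string To { get; set; }
        public string Subject { get; set; }
        public string Body { get; set; }
        public string AttachmentBlobName { get; set; }
    }

    public static class EmailFunctions
    {
        public static async Task SendEmailAsync(
            [QueueTrigger("email-requests")] EmailRequest request,      // JSON message -> POCO
            [Blob("attachments/{AttachmentBlobName}", FileAccess.Read)] Stream attachment,
            ILogger log)
        {
            var mail = new SendGridMessage();
            mail.SetFrom("noreply@example.com");
            mail.AddTo(request.To);
            mail.SetSubject(request.Subject);
            mail.AddContent("text/plain", request.Body);
            await mail.AddAttachmentAsync(request.AttachmentBlobName, attachment);

            var sendGrid = new SendGridClient("<sendgrid-api-key>");
            await sendGrid.SendEmailAsync(mail);
            log.LogInformation($"Sent mail to {request.To}");
        }
    }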
I have been struggling with this concept for a while. I am attempting to come up with a loosely coupled Azure component design that is completely scalable using queues and worker roles, which dequeue and process the items. I can scale the worker roles at will, and publishing to the queue is never an issue. So far so good, but it seems that the only real-world model this could work in is fire-and-forget. It would work fantastically for logging and other one-way operations, but let's say I want to upload a file using queues/worker roles, save it to blob storage, then get a response back once it is complete. Or should this type of model not be used for online apps? What is the best way to send a notification back once an operation is completed? Do I create a response queue, then (somehow) retrieve the associated response? Any help is greatly appreciated!
I usually do a polling model.
1. Client (usually a browser) sends a request to do some work.
2. Front-end (web role) enqueues the work and replies with an ID.
3. Back-end (worker role) processes the queue and stores the result in a blob or table entity named after that ID.
4. Client polls ("Is it done yet?") at some interval.
5. Front-end checks to see if the blob or table entity is there and replies accordingly.
See http://blog.smarx.com/posts/web-page-image-capture-in-windows-azure for one example of this pattern.
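A bare-bones sketch of the front-end half of this pattern, using the current Azure.Storage packages and invented names: hand out an id on enqueue, then answer the poll by checking whether the worker has written the result blob yet.

    using System;
    using Azure.Storage.Blobs;
    using Azure.Storage.Queues;

    public class JobFrontEnd
    {
        private readonly QueueClient _jobs =
            new QueueClient("<connection-string>", "jobs");
        private readonly BlobContainerClient _results =
            new BlobContainerClient("<connection-string>", "results");

        // Enqueue the work and hand the client an id to poll with.
        public string StartJob(string payload)
        {
            var id = Guid.NewGuid().ToString();
            _jobs.SendMessage($"{id}|{payload}");   // worker role dequeues this
            return id;
        }

        // "Is it done yet?" -- the worker writes results/<id> when finished.
        public bool IsDone(string id)
        {
            return _results.GetBlobClient(id).Exists().Value;
        }
    }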
You could also look into the AppFabric Service Bus instead of using storage queues. With the Service Bus you can send messages, use queues, etc. You could then use publish/subscribe instead of polling!