I want to create a custom Azure Logic App workflow that does some heavy processing. I have been reading as much as I can about this. I want to describe what I wish to do as I currently understand it, and I am hoping someone can point out where my understanding is incorrect, or suggest a better way to do this.
What I want to do is take an application that runs a heavy computational process on a 3D mesh and turn it into a node to use in Azure Logic App flows.
What I am thinking so far, in a basic form, is this:
HTTP Trigger App: This logic app receives a reference to a 3D mesh to be processed, saves the mesh to Azure Storage, and passes that reference to the next logic app.
Mesh Computation Process App: This receives the Azure Storage reference to the 3D mesh. It then launches a high-performance server with many CPUs and GPUs; the server downloads the mesh, processes it, and uploads the result back to Azure Storage. This app then passes the reference to the processed mesh to the next logic app. Finally, it shuts down the high-performance server so it doesn't consume resources unnecessarily.
Email Notification App: This receives the Azure Storage reference to the processed mesh, then sends the user an email with the download link.
Is this possible? From what I've read so far, it appears to be. I just want someone to verify this in case I've severely misunderstood something.
Also, I am hoping to get a little guidance on the mechanism to launch and shut down a high-performance server within the 'Mesh Computation Process App'. The only place the Azure documentation mentions asynchronous, long-running task processing in Logic Apps is on this page:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-create-api-app
It says you need to launch an API App or a Web App to receive the Azure Logic App request and then report status back to Logic Apps. I was wondering: is it possible to do this in a serverless manner? The 'Mesh Computation Process App' would fire off an Azure Function which spins up the high-performance server; another Azure Function would periodically ping that server for status until it completes; at that point an Azure Function would trigger the server to shut down and signal to the 'Mesh Computation Process App' that it is complete, so it can continue on to the next logic app. Is it possible to do it in that manner?
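The fire/poll/shutdown idea can be reduced to a polling loop. The sketch below is an illustrative toy in Python, not real Azure SDK code: `get_status` stands in for whatever status endpoint the high-performance server would expose, and in a real serverless setup each poll would be a separate timer-triggered function invocation rather than one blocking loop.

```python
import time

def wait_for_completion(get_status, interval_s=0.01, timeout_s=5.0):
    """Poll a status callable until it reports 'complete' or times out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "complete":
            return True           # caller can now trigger VM shutdown
        time.sleep(interval_s)
    return False                  # timed out: escalate / error flow

# Simulate a server that finishes after three polls.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "complete" if calls["n"] >= 3 else "running"

print(wait_for_completion(fake_status))  # True
```

The timeout is important: without it, a dead VM would leave the flow polling forever.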
Any comments or guidance on how to better approach or think about this would be appreciated. This is my first dive into Azure, so I am trying to build a proper understanding of Azure while designing a system like this.
It should be possible. I'm not sure whether Logic Apps themselves can create all of those resources for you, but it can definitely be done with Azure Functions, in a serverless manner.
For your second step, if I understand correctly, you need it to run for a long time just so it can pass something along once the VM is done? You don't really need that. In a serverless world, try not to think in terms of long-running tasks; remember that everything is an event.
Putting something into Azure Blob Storage is an event you can react to, which removes your need for explicit linking.
Your first step saves the mesh to Azure Storage, and that's it; it doesn't need to do anything else.
Your second app triggers on the inserted blob to initiate processing.
The VM processes the mesh and puts the result back in storage.
The email app triggers on the file landing in the "processed" folder.
Another app triggers on the same file to shut down the VM.
This way you remove the long-running state management and the direct chaining of apps; each app does only what it needs to do, and apps trigger automatically off the results of the previous steps.
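The event-driven chain above can be sketched with a toy in-memory "blob store" that fires handlers on upload. This is illustrative Python only; the class and every name in it are made up, standing in for Azure Blob Storage triggers, not the real SDK:

```python
from collections import defaultdict

class BlobStore:
    """Toy stand-in for Azure Blob Storage with upload triggers."""
    def __init__(self):
        self.blobs = {}
        self.handlers = defaultdict(list)

    def on_upload(self, prefix, handler):
        self.handlers[prefix].append(handler)

    def put(self, path, data):
        self.blobs[path] = data
        for prefix, handlers in self.handlers.items():
            if path.startswith(prefix):
                for h in handlers:
                    h(path, data)

store = BlobStore()
emails, shutdowns = [], []

# Processing app: triggers on raw uploads, writes the "processed" result.
store.on_upload("incoming/", lambda path, data: store.put(
    "processed/" + path.split("/", 1)[1], data.upper()))
# Email app: triggers on processed results.
store.on_upload("processed/", lambda path, data: emails.append(path))
# Shutdown app: triggers on the same processed file.
store.on_upload("processed/", lambda path, data: shutdowns.append("vm-stopped"))

# The first app just saves the mesh; everything else cascades from events.
store.put("incoming/mesh.obj", "vertices")
print(emails, shutdowns)  # ['processed/mesh.obj'] ['vm-stopped']
```

Note that no app holds a reference to the next one; each only reacts to storage events, which is the decoupling being described.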
If you do need some kind of state management/orchestration across all of your steps and still want to stay serverless, look into Durable Azure Functions. They are serverless, but the actions they take and the results they get are stored in table storage, so an orchestration can be recreated and restored to the state it was in before. All of this is done for you automatically; it just changes a bit what exactly you can do inside the orchestration so that it stays durable.
The actual state management you might want is something to keep track of all the VMs and reuse them, instead of spending time spinning them up and killing them. But don't complicate it too much for now.
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
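The replay model behind Durable Functions can be illustrated with a toy: the orchestrator is a generator, activity results are recorded in a history, and on restart the history is replayed so completed steps never execute twice. This mimics the model only; real Durable Functions manage the history in table storage and the replay loop for you, and all names below are invented for the sketch:

```python
def orchestrator(ctx):
    """Deterministic orchestration: download -> process -> email."""
    mesh = yield ("download", "mesh.obj")
    result = yield ("process", mesh)
    yield ("email", result)

calls = []  # records which activities actually execute
ACTIVITIES = {
    "download": lambda arg: calls.append("download") or f"raw:{arg}",
    "process": lambda arg: calls.append("process") or arg.upper(),
    "email": lambda arg: calls.append("email") or f"sent:{arg}",
}

def run(history):
    """Drive the orchestrator, replaying recorded results before
    executing anything new (the core durable-functions trick)."""
    gen = orchestrator(None)
    request = gen.send(None)
    i = 0
    try:
        while True:
            if i < len(history):
                result = history[i][1]          # replayed, not re-run
            else:
                name, arg = request
                result = ACTIVITIES[name](arg)  # actually executed
                history.append((name, result))
            i += 1
            request = gen.send(result)
    except StopIteration:
        return history

# A previous run recorded two steps before the host was recycled;
# on replay only the remaining step really executes.
history = run([("download", "raw:mesh.obj"), ("process", "RAW:MESH.OBJ")])
print(calls)  # ['email']
```

This is also why orchestrator code must stay deterministic: on replay it must request the same steps in the same order.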
Of course, you still need to think about error handling: what happens if your VM just dies without uploading anything? You don't want work to silently disappear. You can trigger special flows to handle retries/errors, send different emails, etc.
I have two Azure Functions that I think of as "Producer-Consumer". One is an "HttpTrigger"-based Function (Producer) which can be fired at random times. It writes the input data into a static "ConcurrentDictionary". The second is a "TimerTrigger" Azure Function (Consumer). It periodically reads data from the same "ConcurrentDictionary" used by the Producer Function and then does some processing.
Both Functions are in the same .NET project (but in different classes). The in-memory data sharing through the static "ConcurrentDictionary" works perfectly when I run the application locally; I assume they then run in the same process. However, when I deploy these Functions to Azure (they are in the same Function App resource), I find that data sharing through the static "ConcurrentDictionary" does not work.
I am curious whether, in Azure, each Function runs in its own process (probably that's why they are not able to share an in-process static collection). If that is the case, what are my options to make these two Functions work as a proper "Producer-Consumer" pair? Would keeping both Functions in the same class help?
The scenario is probably just the opposite of what is described in this post: "https://stackoverflow.com/questions/62203987/do-azure-function-from-same-app-service-run-in-same-instance". Unlike the question in that post, I would like both Functions to use the same static member of a static class instance.
I'm sorry that I cannot experiment much, because deployment goes through an Azure DevOps pipeline, and too many check-ins to the repository is slightly inconvenient. As I mentioned, it works well locally, and I don't know how to recreate in my local environment what's happening in Azure so that I can try different options. Is there some configuration I am missing?
Don't do that. Use an Azure Storage queue, Event Grid, Service Bus, or something else that is reliable; just don't try to use a shared object. It will fail as soon as a scale-out happens, or as soon as one of the processes dies. Think of Functions as independent pieces and do not try to go against the framework.
Yes, it might work when you run the Functions locally, but then you are running on a single machine and the runtime might use the same process; once deployed, that is no longer true.
If you really, really don't want to decouple your logic into a fully separated producer and consumer, then write a single Function that uses an in-process queue or collection and have that one Function deal with the processing.
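The queue-based handoff can be sketched locally. A sketch in Python using an in-process `queue.Queue` with two threads, which shows the shape of the pattern only; in Azure the queue would be a Storage queue or Service Bus, and each message would trigger the consumer Function rather than a thread loop:

```python
import queue
import threading

work = queue.Queue()
results = []

def producer():
    # The HttpTrigger function would enqueue messages here.
    for item in ["a", "b", "c"]:
        work.put(item)
    work.put(None)  # sentinel: no more work (local-demo convenience only)

def consumer():
    # In Azure, each dequeued message would be one function invocation.
    while True:
        item = work.get()
        if item is None:
            break
        results.append(item.upper())  # the periodic processing step

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print(results)  # ['A', 'B', 'C']
```

Unlike a shared static dictionary, this handoff survives the producer and consumer living in different processes, because the queue (in the real setup) is external, durable storage.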
We have a service running as an Azure Function (Event and Service Bus triggers) that we feel would be better served by a different model. It takes a few minutes to run and loads a lot of objects into memory, and it appears to reload them on every invocation instead of keeping them in memory, which hurts performance.
What is the best Azure service to move to, with the following goals in mind?
Easy to move and doesn't need too many code changes.
We have long term goals of being able to run this on-prem (kubernetes might help us here)
Appreciate your help.
To achieve first goal:
Move your Azure Function code into a continuously running WebJob. It has no maximum execution time and it can run continuously, caching objects in its context.
To achieve second goal (On-premise):
You need to explain this better, but a WebJob can be run as a console program on-premises, and you can wrap it in a Docker container to move it from on-premises to any cloud. However, if you need to consume messages from an Azure Service Bus, you will need a hybrid on-premises/Azure approach, connecting your local server to the cloud with a VPN or ExpressRoute.
Regards.
There are a couple of ways to solve this, each requiring a slightly larger change from where you are.
If you are just trying to separate out the heavy initial load, you can do it once into a Redis Cache instance and then reference the objects from there.
If you are concerned about how long your worker can run, then WebJobs (as explained above) can work. However, I'd suggest avoiding them, since that's not where Microsoft is putting its resources; rather, look at Durable Functions, where an orchestrator function can drive a worker function. (Even here, be careful: since Durable Functions retain history, after running for a very long time the history tables might get too large, so consider programming in something like restarting the orchestrator after, say, 50,000 runs; obviously the number will vary with your case.) Also see this.
If you add the constraint of portability, you can run this function in a Docker image on an AKS cluster in Azure. This might not work well for Durable Functions (try it out, who knows :) ), but it will surely work for the worker functions (which cost you the most compute anyway).
If you want to bring the workloads completely on-prem, then Azure Functions might not be a good choice. You can create an HTTP server using the platform of your choice (Node, Python, C#...) and have it invoke the worker routine. Then you can run this whole setup inside an image on a Kubernetes cluster on-prem, and to the user it looks just like a load-balanced web server :) You can decide whether to keep the data in Azure or bring it on-prem as well, but beware of egress costs if you decide to move it out once you've moved it up.
It appears that the functions are affected by cold starts:
Serverless cold starts within Azure
Upgrading to the Premium plan would move your functions to pre-warmed instances, which should counter the problem you are experiencing:
Pre-warmed instances for Azure Functions
However, if you eventually want to deploy your functions/triggers on-prem, you should split them out as microservices and deploy them in containers.
Currently, the fastest way would probably be to deploy the containerized triggers via Azure Container Instances if you don't already have a Kubernetes Cluster running. With some tweaking, you can deploy them on-prem later on.
There are a few options:
Move your Function App to the Premium plan. But it will not help much under heavy load and scale-out.
Issue: under heavy load you will still face cold-start problems, and the issue will persist.
Redis Cache will resolve most of your issues, since the main concern is the heavy initial load.
Issue: if your system is multi-tenant, the cache can become heavy over time.
Create small durable micro-functions. This is not quite the answer to your question, since you don't want lots of changes, but it would resolve most of your issues.
I have an Azure Cloud Service which scales instances out and in. This works fine, using some App Insights metrics to manage the auto-scaling rules.
The issue comes when the service scales in and Azure eliminates instances: is there a way to scale in an instance only once that instance is done processing its current task?
There is no way to do this automatically. Azure will always scale in the highest-numbered instance.
The ideal solution is to make the work idempotent and chunked, so that if an instance doing some set of work is interrupted (scale-in, VM reboot, power loss, etc.), another instance can pick up the work where it left off. This lets you recover from many possible failure scenarios, instead of designing something specific to scale-in.
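The idempotent, chunked pattern might look like the sketch below. Everything here is illustrative: `done` stands in for a durable checkpoint store (a blob or table), not process memory, and `fail_after` simulates the instance being scaled in mid-run.

```python
def process_chunks(chunks, done, fail_after=None):
    """Process chunks, skipping ones already checkpointed in `done`.
    fail_after simulates the instance dying partway through."""
    processed = []
    for i, chunk in enumerate(chunks):
        if i in done:            # idempotent: never redo completed work
            continue
        if fail_after is not None and len(processed) >= fail_after:
            raise RuntimeError("instance interrupted")
        processed.append(chunk * 2)   # the actual work
        done.add(i)                   # checkpoint after each chunk
    return processed

chunks = [1, 2, 3, 4]
done = set()
try:
    process_chunks(chunks, done, fail_after=2)   # first instance dies
except RuntimeError:
    pass
survivors = process_chunks(chunks, done)         # another instance resumes
print(sorted(done), survivors)  # [0, 1, 2, 3] [6, 8]
```

The key property is that the checkpoint is written only after a chunk completes, so a resuming instance repeats at most the one chunk that was in flight, and never skips unfinished work.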
Having said that, you can manually create a scaling solution that only removes instances that are not doing work, but doing so will require a fair bit of code on your part. Essentially you will use a signaling mechanism running in each instance that will let some external service (a Logic app or WebJob or something like that) know when an instance is free or busy, and that external service can delete the free instances using the Delete Role Instances API (https://learn.microsoft.com/en-us/rest/api/compute/cloudservices/rest-delete-role-instances).
For more discussion on this topic see:
How to Stop single Instance/VM of WebRole/WorkerRole
Azure autoscale scale in kills in use instances
Another solution, though it breaks the assumption that you're using an Azure Cloud Service: if you use App Services instead, you can set up autoscaling on the App Service plan, effectively taking care of the instance drops you are experiencing.
This is an infrastructure change, so it's not a two-click thing, but I believe App Services are better suited in many situations, including this one.
You can weigh the pros and cons, but if your product is traffic-managed, this switch will not be painful.
Kwill, thanks for the links/information; the top item in the second link was the best compromise.
The work items usually took under five minutes, and the service already re-handled failed processes. So, after some research, we decided to track when the service was processing a queue item and use a while loop in the RoleEnvironment.Stopping event to delay restart and scale-in events until the process had a chance to finish.
App Insights was used to track custom events during the Stopping event, to measure how often the process completes versus restarts during the delay cycles.
I have a C# console application which extracts a 15 GB Firebird database file at a server location into multiple files, then loads the data from those files into a SQL Server database. The console application uses the System.Threading.Tasks.Parallel class to load the files into SQL Server in parallel.
It is a weekly process and it takes 6 hours to complete.
What is the best option to move this (console application) process to the Azure cloud - a WebJob, a WorkerRole, or another cloud service?
How can I reduce the execution time (6 hrs) after moving to the cloud?
How would I implement the suggested option? Please provide pointers or code samples, etc.
Your detailed comments would be very much appreciated.
Thanks
Bhanu.
Let me share some thoughts on your first question: "What is the best option to move this (console application) process to the Azure cloud - WebJob or WorkerRole or any other cloud service?"
First, you can achieve the task with either a WebJob or a WorkerRole, but I would suggest going with a WebJob.
Pros of a WebJob:
Deployment is quicker; you can turn your console app, without any changes, into a continuously running WebJob within minutes (https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/)
Built-in timer support, whereas with a WorkerRole you would need to handle scheduling on your own
Fault tolerance: when your WebJob fails, there is built-in resume logic
You might want to check out Azure Functions. You pay only for the processing time you use. Note, though, that the Consumption plan has a maximum run time (measured in minutes, not hours), so a long-running job would need a different hosting plan or would have to be split into smaller pieces.
They can be set up on a schedule or kicked off from other events.
If you are already doing the work in parallel, you could break some of the parallel tasks out into separate Azure Functions. Beyond that, speeding things up would require specific knowledge of what you are trying to accomplish.
In the past, when I've tried to speed up work like this, I would start by emitting log messages during processing that contain the current time or a computed duration (using the Stopwatch class), then find out which areas can be improved. The slowness may also be on the SQL Server side; more investigation would be needed on your part. But the first step is always capturing metrics.
Since Azure Functions can scale out horizontally, you might first break the data from the files into smaller chunks and let a function handle each chunk, then process those chunks in parallel. Be sure not to spin up more parallelism than your SQL Server can handle.
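Chunking with bounded parallelism might be sketched as follows. This is an illustrative Python sketch, not the asker's C# code: `load_chunk` is a made-up stand-in for the real bulk-insert call, and the per-chunk timing shows the "capture metrics first" advice from above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_chunk(chunk):
    """Stand-in for loading one chunk into the database, timed."""
    start = time.perf_counter()
    total = sum(chunk)                      # placeholder for the insert
    elapsed = time.perf_counter() - start   # metric to log per chunk
    return total, elapsed

def chunked(rows, size):
    """Split rows into fixed-size chunks."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

rows = list(range(100))
chunks = chunked(rows, 25)

# max_workers caps concurrency to what the database can handle.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load_chunk, chunks))

totals = [t for t, _ in results]
print(totals, sum(totals))  # [300, 925, 1550, 2175] 4950
```

The cap on workers is the important knob: more parallelism past what SQL Server can absorb just moves the bottleneck and can make the total time worse.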
We have an application that accepts file uploads from the user.
Whenever we deploy our application we stop the application process and start it again. All lengthy processing is done before we actually stop the application so the actual downtime is fairly small (a few seconds).
However, when stopping the process we also kill active requests to our application (i.e. file uploads).
What would be a good way to handle this? I have a few ideas:
Extract the file upload handler into a separate service?
Make the restart more "intelligent": tell the process to stop accepting new requests and wait for the currently active requests to finish before killing the process
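The second idea, draining active requests before restart, can be sketched as below. This is an illustrative Python toy, with threads standing in for in-flight uploads and all names invented for the sketch:

```python
import threading
import time

class UploadTracker:
    """Counts in-flight uploads and supports a drain-before-stop step."""
    def __init__(self):
        self._active = 0
        self._accepting = True
        self._lock = threading.Condition()

    def try_begin(self):
        with self._lock:
            if not self._accepting:
                return False        # restart pending: reject new uploads
            self._active += 1
            return True

    def end(self):
        with self._lock:
            self._active -= 1
            self._lock.notify_all()

    def drain(self, timeout=5.0):
        """Refuse new work and block until active uploads complete."""
        with self._lock:
            self._accepting = False
            deadline = time.monotonic() + timeout
            while self._active > 0:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    return False    # gave up: uploads still in flight
                self._lock.wait(remaining)
            return True

tracker = UploadTracker()
assert tracker.try_begin()                    # one upload in flight
threading.Timer(0.05, tracker.end).start()    # it finishes shortly
drained = tracker.drain()                     # the deploy waits here
print(drained, tracker.try_begin())  # True False
```

A timeout on the drain is worth keeping even in a real implementation, so one stuck upload cannot block a deploy forever.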
You've just listed two of essentially three solutions I can think of :-)
The third would be a multi-tier deployment with a smart load balancer and deploy process smart enough to know what node to restart and when.
If it is a smaller-scale app with no significant impact, I would go with what seems to me the simpler version: track active uploads and check them on restart. You maintain just one app, you know? But it makes the upload logic more complex.
However, if the uploads are important enough, and they seem to be, it may be worth extracting them into a separate service; not just because of deploys, but also to protect you from unexpected crashes and shutdowns. You would then have to decide how the service communicates completed uploads, and also handle the client response, etc.
In my view, one app to maintain and deploy is simpler than two, but of course also a bit less robust.
So the answer really depends on your needs and resources, right?