I am a complete newbie for Azure and Azure Functions but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to basically do what I would normally do in a .Net console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that each pick up a pipe-delimited file, apply some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically described as small pieces of code. How small do they mean? Is it best practice to replace a console app with a bunch of Azure Functions (e.g. one function that reads the file and creates a list of objects, another that loops through those items and applies the business logic, and another that writes the data to a database), or can I use one Azure Function to do all of that?
The direct answer is yes - you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
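For the chained scenario described in the question (read a file, apply business logic, write to a database), a minimal Durable Functions sketch might look like this; the activity names and bodies are hypothetical, and it assumes the Microsoft.Azure.WebJobs.Extensions.DurableTask package:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FileImportOrchestration
{
    // Orchestrator: chains the three steps from the question, in order.
    [FunctionName("FileImportOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var fileName = context.GetInput<string>();

        // Each activity is a separately retryable unit of work.
        var records = await context.CallActivityAsync<List<string>>("ParseFile", fileName);
        var processed = await context.CallActivityAsync<List<string>>("ApplyBusinessRules", records);
        await context.CallActivityAsync("SaveToDatabase", processed);
    }

    [FunctionName("ParseFile")]
    public static List<string> ParseFile([ActivityTrigger] string fileName)
    {
        // Hypothetical: read the pipe-delimited file into a list of records here.
        return new List<string>();
    }

    [FunctionName("ApplyBusinessRules")]
    public static List<string> ApplyBusinessRules([ActivityTrigger] List<string> records)
    {
        // Hypothetical: the business logic from the console app goes here.
        return records;
    }

    [FunctionName("SaveToDatabase")]
    public static Task SaveToDatabase([ActivityTrigger] List<string> records)
    {
        // Hypothetical: write the processed records to the database here.
        return Task.CompletedTask;
    }
}
```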
Because of that, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. Here is an example of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service (a minimal sketch follows this list).
If you need long-running jobs but only on scheduled runs, I have had a really great experience with Hangfire, which can be hosted in Azure as well.
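A rough sketch of that Worker Service option (the worker name and polling interval are hypothetical):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical worker that polls for new files on a fixed interval.
public class FileImportWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Pick up the pipe-delimited file, apply business logic,
            // update the database, and log - same as the console app.
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services => services.AddHostedService<FileImportWorker>())
            .Build()
            .Run();
}
```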
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles used for any class to my functions too. And whenever you need to coordinate actions or run things in parallel, you can use the Durable Functions framework.
The only concern is execution time. Your functions can get pretty expensive if you're running on the Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions even if it takes reading files or database communication.
What you need to be careful about and configure, though, is the timeout. Their scalability is an interesting topic as well.
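For example, on the Consumption plan the default timeout is 5 minutes and can be raised to at most 10 minutes in host.json (a sketch, using the v2+ runtime schema):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```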
If you need to host an application, you need a machine or a part of the storage space of a machine in Azure to do that.
I am looking to use GCP for a micro-services application. After comparing AWS and GCP I have decided to go with Google because one major requirement for the project is to schedule tasks to run in the future (Cloud Tasks) which AWS does not seem to offer an equivalent of.
I am planning on containerizing my services and deploying to GCP using Cloud Run with a Redis cluster running as well for caching.
I understand that you cannot have multiple Firestore instances running in one project. Does this mean that all of my services will be using the same database?
I was looking to follow a model (possible on AWS) where each service had its own database instance that it reached out to.
Is this pattern possible on GCP?
Firestore is indeed, for the moment, limited to a single database instance per project. For performance that is usually not a problem, but for isolation, such as in your use case, that can indeed be a reason to look elsewhere.
Firebase's original Realtime Database does allow multiple instances per project, and recently added a REST API for provisioning database instances. Since each Google Cloud Project can be toggled to also be a Firebase project, you could consider that.
Does this mean that all of my services will be using the same database?
I don't know all details of your case. Do you think that you can deploy a "microservice" per project? Not ideal, especially if they are to communicate using PubSub, but may be an option. In that case every "microservice" may get its own Firestore if that is a requirement.
I don't think one should consider GCP projects as some kind of "hard boundary". For me they are just another level of granularity - in addition to folders, etc.
There might be some benefits to a "one microservice - one project" approach as well. For example: less dependent lifecycles, better (more accurate) security, maybe simpler development workflows...
Recently I got another developer's Azure Functions code written in C# Script (.csx); I usually write Azure Functions using Visual Studio.
I love C# Script's imperative bindings; they make the code easier (no need to manage connections).
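For context, a minimal sketch of what such an imperative binding looks like in a run.csx (the blob path and payload are hypothetical, and this assumes the Storage extension is available):

```csharp
// run.csx - the blob path is computed at runtime instead of
// being fixed ahead of time in function.json.
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static async Task Run(string input, Binder binder, ILogger log)
{
    var attribute = new BlobAttribute($"processed/{input}.txt", FileAccess.Write);

    // The runtime resolves the connection; no connection management in user code.
    using (var writer = await binder.BindAsync<TextWriter>(attribute))
    {
        await writer.WriteAsync("done"); // hypothetical payload
    }

    log.LogInformation($"Wrote processed/{input}.txt");
}
```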
I have seen some problems with C# Script:
Code quality tools don't work (StyleCop/Sonar)
Can't write unit tests against a .csx file
If you have a different opinion, please share.
So I decided that I will convert all functions (10) into a .NET project with Sonar integration and unit tests.
Question
Most of my functions don't have any business logic; they are triggered by Event Hubs and dump data into Cosmos DB. I'm not able to decide: should I create 10 projects or 1 project under a single solution?
I believe a single project with multiple functions shares a single host.json file, so if I change a host.json value to scale a particular function, it will impact the other functions as well. Am I right?
Is "number of functions = number of projects" the correct solution?
How would it impact cost?
My personal opinion is that CSX files are fine for experimenting or something quick and dirty, but for production you should be using compiled C#.
Adjusting any setting in the host.json file will impact all functions within that function app. There is no universally correct answer on when to break out your function into separate apps but there are a few questions you can ask to help answer it for your scenario:
Does a particular function have a dramatically different scaling characteristic than the other function(s)? (e.g. does one of your message triggers get very different message volume or processing logic than the others - do you need to change the host.json? See the sketch after this list.)
Is your function handling a separate business process from the other functions? (e.g. one is receiving device telemetry messages while the other is handling audit telemetry)
Do #1 and #2 justify the management and devops overhead of creating a separate function app? (Lots and lots of function apps, especially in micro-service-like architectures, can be a challenge to manage.)
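To make #1 concrete: settings like the Event Hubs batch options live in host.json and apply to every function in the app, so two functions wanting different values here is a hint they belong in separate apps. A sketch (the exact key names vary with the Event Hubs extension version):

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "batchCheckpointFrequency": 1,
      "eventProcessorOptions": {
        "maxBatchSize": 64,
        "prefetchCount": 256
      }
    }
  }
}
```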
In your case you have some flexibility with your function apps: because they are just message listeners, they aren't impacted as much as, say, HTTP triggers (e.g. HTTP endpoints changing) if you find you want to break the function out into a separate app later.
Your overall idea to move to precompiled projects makes sense. That's recommended by Microsoft for all but simplest ad-hoc Functions.
Single project vs multiple projects should be decided based on whether you want a single Function App or multiple Apps. Function App is a scaling unit. If you want multiple Functions to scale independently, they should be in separate Apps and projects.
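As an illustration, here is a sketch of what one of those precompiled Event Hubs-to-Cosmos functions might look like (the hub, database, collection, and connection-setting names are hypothetical, using the in-process binding attributes):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TelemetryIngest
{
    // Triggered by Event Hubs; the output binding writes the document to Cosmos DB,
    // so there is still no connection management in user code.
    [FunctionName("TelemetryIngest")]
    public static void Run(
        [EventHubTrigger("telemetry-hub", Connection = "EventHubConnection")] string message,
        [CosmosDB("TelemetryDb", "Events", ConnectionStringSetting = "CosmosConnection")] out dynamic document,
        ILogger log)
    {
        document = new { payload = message };
        log.LogInformation("Stored one event.");
    }
}
```

A nice side effect is that a method like this can be called directly from a unit test, which addresses the .csx testability problem mentioned above.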
I want to write a background process in NodeJS which will process messages from a TOPIC. Reading through an array of confusing articles, these are my options:
Write a webjob in NodeJS with a continuous polling mechanism. All plumbing code has to be written by me.
Write a webjob in NodeJS using azure-webjobs-sdk-script (which I think is basically a function wrapped under a webjob) and have the same trigger mechanism as a function and also advantage of webjob dashboard.
Write a function in NodeJS with bindings to TOPIC.
Is my understanding of the role of the azure-webjobs-sdk-script library correct? Is it just a wrapper for functions to run under a WebJob? What is the difference between this and running Functions under an App Service plan?
I could not find any clear definition of these options.
azure-webjobs-sdk-script (https://github.com/Azure/azure-webjobs-sdk-script) is what we refer to as the 'Functions Runtime'. In terms of deploying it yourself as a WebJob vs using a Function, let's look at some pros and cons:
Advantages of using Functions
You can use the Consumption plan. That is a huge advantage, especially if your code only needs to run occasionally (basically, it's cheaper!)
You can use the Portal experience to develop it.
It's simpler to deploy: you only need to deploy your NodeJS function, and don't have to worry about the runtime.
The runtime gets automatic updates, while in the WebJobs case you're responsible for keeping it up to date.
Advantages of using a WebJob
The main one is that you get more control, e.g. if you want to customize the script runtime, you can deploy your own custom binaries. With Functions, you always use an official runtime.
Overall, I would definitely suggest giving Functions a try before you get into the more complex alternative of deploying the script runtime as a WebJob.
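For reference, with option 3 the Topic subscription trigger for a NodeJS function is declared in the function's function.json; here is a sketch with hypothetical topic, subscription, and connection-setting names (the handler then receives the message as an argument):

```json
{
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "mytopic",
      "subscriptionName": "mysubscription",
      "connection": "ServiceBusConnection"
    }
  ]
}
```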
We are currently deploying a single Function App per environment / per region in Azure. These Function Apps contain lots of Functions inside them. With the Service Plan set to consumption and therefore dynamic we are fairly happy with this as it reduces the operational complexity in our ARM templates.
We do wonder though, if it would be best to have more "function apps" per environment and spread our functions across them?
Is there any real benefit to doing this as we are under the impression performance will be scaled by the dynamic service plan?
Jordan,
The answer to the question would really depend on the type of workloads you're handling with your functions.
Although the scale controller will handle scaling to meet demand, functions within a Function App do share resources on each instance, and a resource-intensive function (either memory- or CPU-heavy) may impact other functions in the same app.
There is also no process isolation between functions in the same Function App. They all run in the same process (except for some of the scripting languages like Python, Batch, etc.) and in the same App Domain. So if isolation is a factor (for reasons like security, dependency management, shared state, etc.), you may want to consider splitting functions into different apps.
Versioning and deployment are another factor worth considering, as the unit of deployment is a Function App (and not the individual functions).
With that said, if you're not running into resource consumption issues with your workloads and the issues mentioned above are not a concern, as you pointed out, running multiple functions in a single function app significantly simplifies management, so I wouldn't change that approach if there's no need to do so.
I hope this helps!
My main concern was already pointed out by Fabio: all your functions run in the same process. So if one of the functions runs into a timeout, the host will be shut down (and restarted, of course). This would also affect your other functions.
I had this problem with a Service Bus trigger calling a stored procedure, which sometimes reached the timeout threshold. The restart of my function app took about 7 minutes and the real-time data was not really real-time anymore ;-)
I've been reading about Azure's storage system, and worker roles and web roles.
Do you HAVE to develop an application specifically for Azure with this? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused because they read as if you need to develop an Azure-specific application.
Looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part the web application will just work in Azure)
But you don't remote connect to deploy. You actually build a package (zip) with a manifest (xml) which has information about how to deploy your app, and you give it to Azure. In turn, Azure will take care of allocating servers and deploying your app.
There are several elements to think about here -
Code wise - to a large degree this is 'just' .net running on IIS and Windows, so everything is very familiar and all the past learnings, best-practices, etc. apply.
On top of that you may want to leverage some Azure-specific capabilities - for example table storage, or queues, or interacting with your deployment - for which you might need to learn a few more APIs, but these aren't big, and are well thought out and kept quite simple, so there's not a big learning curve. Good architecture, of course, would look to abstract these away to prevent/reduce lock-in, but that's a design choice.
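As a taste of how small those APIs are, here is a hedged sketch of enqueuing a message with the classic storage client library (the package choice and queue name are assumptions):

```csharp
using Microsoft.WindowsAzure.Storage;       // classic storage client library (an assumption)
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueDemo
{
    public static void Enqueue(string connectionString)
    {
        // Parse the storage account from configuration and get a queue reference.
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items"); // hypothetical name

        queue.CreateIfNotExists();                        // idempotent
        queue.AddMessage(new CloudQueueMessage("hello")); // enqueue one message
    }
}
```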
Outside the code, however, there's a bit more to think about -
You'd like to think about your deployment - because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS - namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You would also like to think about monitoring - which would need to be done slightly differently.
Last - the cloud enables different scenarios, and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So - bottom line - yes - you could probably get an application into Azure very quickly without really learning much, if anything, but to do things properly, and to really gain from the platform, you'd want to learn a bit more about it. The good thing is - it's not much, and it all feels very familiar, just another 'framework' for .net (and Java, amongst others....)
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. This application will then be pretty portable to another server or cloud platform.
But like you have seen, there are a number of Azure specific features. But these are generally optional and you can do without them, although in building highly scalable sites they are useful.
Azure is a platform, so under normal circumstances you should not need to remote desktop in and fiddle with stuff. RDP is really just for use in desperate debugging situations.