I want to write a background process in Node.js which will process messages from a topic. After reading through an array of confusing articles, these are my options:
1. Write a WebJob in Node.js with a continuous polling mechanism. All the plumbing code has to be written by me.
2. Write a WebJob in Node.js using azure-webjobs-sdk-script (which I think is basically a Function wrapped in a WebJob) and get the same trigger mechanism as a Function, plus the advantage of the WebJobs dashboard.
3. Write a Function in Node.js with a binding to the topic.
Is my understanding of the role of the azure-webjobs-sdk-script library correct? Is it just a wrapper for Functions to run under a WebJob? What is the difference between this and running Functions under an App Service plan?
I could not find any clear definition of these options.
azure-webjobs-sdk-script (https://github.com/Azure/azure-webjobs-sdk-script) is what we refer to as the 'Functions Runtime'. In terms of deploying it yourself as a WebJob vs using a Function, let's look at some pros and cons:
Advantages of using Functions
You can use the Consumption plan. That is a huge advantage, especially if your code only needs to run occasionally (basically, it's cheaper!)
You can use the Portal experience to develop it.
It's simpler to deploy: you only need to deploy your NodeJS function, and don't have to worry about the runtime.
The runtime gets automatic updates, while in the WebJobs case you're responsible for keeping it up to date.
Advantages of using a WebJob
The main one is that you get more control. For example, if you want to customize the script runtime, you can deploy your own custom binaries. With Functions, you always use an official runtime.
Overall, I would definitely suggest giving Functions a try before you get into the more complex alternative of deploying the script runtime as a WebJob.
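For reference, the 'Function with a binding to the topic' option needs very little plumbing. Assuming the topic is a Service Bus topic, the trigger is declared in a function.json next to your Node.js code and the runtime hands each message straight to your exported handler; all of the names below are illustrative:

    {
      "bindings": [
        {
          "name": "message",
          "type": "serviceBusTrigger",
          "direction": "in",
          "topicName": "orders",
          "subscriptionName": "processor",
          "connection": "ServiceBusConnection"
        }
      ]
    }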
Related
I am a complete newbie for Azure and Azure Functions but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to basically do what I would normally do in a .Net console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, do some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically used for little pieces of code. How little do they mean? Is it best practice to replace a console app with a bunch of Azure Functions, e.g. one function that reads the file and creates a list of objects, another that loops through those items and applies the business logic, and another that writes the data to the database? Or can I use one Azure Function to do all of that?
The direct answer is yes - you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions (a rough sketch appears at the end of this answer). However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
Because of that, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. Here is an example of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service.
If you need long-running jobs that only run on a schedule, I have had a really great experience with Hangfire, which can be hosted in Azure as well.
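Coming back to the Durable Functions option mentioned above, here is a rough sketch of the chained 'read file, apply logic, write to database' pipeline using the Durable Functions 1.x API; every function and activity name here is a placeholder, not an official sample:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;

    public static class FileImportOrchestration
    {
        // The orchestrator chains the steps; Durable Functions checkpoints after each activity.
        [FunctionName("ImportFile")]
        public static async Task RunOrchestrator(
            [OrchestrationTrigger] DurableOrchestrationContext context)
        {
            var path = context.GetInput<string>();
            var rows = await context.CallActivityAsync<List<string>>("ReadFile", path);
            var records = await context.CallActivityAsync<List<string>>("ApplyBusinessRules", rows);
            await context.CallActivityAsync("WriteToDatabase", records);
        }

        [FunctionName("ReadFile")]
        public static List<string> ReadFile([ActivityTrigger] string path) =>
            new List<string>(System.IO.File.ReadAllLines(path)); // placeholder: fetch the pipe-delimited file

        [FunctionName("ApplyBusinessRules")]
        public static List<string> ApplyBusinessRules([ActivityTrigger] List<string> rows) =>
            rows; // placeholder for the real business logic

        [FunctionName("WriteToDatabase")]
        public static Task WriteToDatabase([ActivityTrigger] List<string> records) =>
            Task.CompletedTask; // placeholder for the database update
    }

Because the orchestrator checkpoints between activities, each step can retry independently instead of the whole job running as one long-lived function.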
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I would use in any class to my functions too. And whenever you need to coordinate actions or run things in parallel, you can always use the Durable Functions framework too.
The only concern is related to execution time. Your functions can get pretty expensive if you're running on the Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions, even if it involves reading files or communicating with a database.
What you need to be careful about and configure, though, is the timeout. Their scalability is an interesting topic as well.
If you need to host an application, you need a machine, or part of one, in Azure to do that.
I am currently learning for a new Microsoft certification and this snippet from the Azure Functions documentation caught my attention (link):
The Azure Functions Tools provides the following benefits:
Edit, build, and run functions on your local development computer.
Publish your Azure Functions project directly to Azure.
Use WebJobs attributes to declare function bindings directly in the C# code instead of maintaining a separate function.json for binding definitions.
Develop and deploy pre-compiled C# functions. Pre-compiled functions provide a better cold-start performance than C# script-based functions.
Code your functions in C# while having all of the benefits of Visual Studio development.
I understand cold-start performance refers to the fact that .csx files have to be compiled before they are used.
I began to wonder if there is a cost (price-wise) of compiling the .csx and, if there is, whether it is even meaningful. If it is done only once for a given version of the Function, then it shouldn't be noticeable.
I don't know if you pay for the compilation time, but I would definitely assume so. I do, however, know the answer to "is it a meaningful cost".
On a consumption pricing plan, the service will typically stay "warm" for about 20 minutes after an invocation (unofficial, not guaranteed). So, if you generally invoke less often than every 20 minutes, you are likely to pay the compilation cost on each invocation. But given how short that time is and how infrequent the invocations are, it will add up to very little cost over time; not a cost I would personally consider meaningful.
Recently I received another developer's Azure Function code written in C# Script (.csx); I am used to writing Azure Functions using Visual Studio.
I love C# Script's imperative binding; it makes the code easier (no need to manage connections).
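For context, this is the kind of imperative binding I mean - the blob path is just an example, and the runtime resolves the storage connection for me:

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;

    // C# Script (.csx): ask the runtime for a binding at run time via Binder
    // instead of declaring a fixed output in function.json.
    public static async Task Run(string input, Binder binder)
    {
        using (var writer = await binder.BindAsync<TextWriter>(
            new BlobAttribute($"output/{Guid.NewGuid()}.txt", FileAccess.Write)))
        {
            await writer.WriteAsync(input);
        }
    }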
I have seen some problems with C# Script:
Code quality tools don't work (StyleCop/Sonar)
Can't write unit tests against a .csx file
If you have a different opinion, please share.
So I decided that I will convert all the functions (10) into a .NET project with Sonar integration and unit tests.
Question
Most of my functions don't have any business logic; they are triggered from Event Hub and dump data into Cosmos DB. I'm not able to decide: should I create 10 projects or 1 project under a single solution?
I believe a single project with multiple functions has a single host.json file, so if I change a host.json value to scale a particular function, it will impact the other functions as well. Am I right?
Is number of functions = number of projects the correct approach?
How would it impact cost?
My personal opinion is that .csx files are fine for experimenting or something quick and dirty, but for production you should be using compiled C#.
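As a rough sketch of what one of those Event Hub-to-Cosmos DB functions might look like once converted to a precompiled project (Functions v2-style attributes; the hub, database, collection and connection setting names are placeholders):

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class TelemetryDumpFunction
    {
        // Bindings are declared as attributes on an ordinary compiled class,
        // so StyleCop/Sonar and unit tests work against it like any other C# code.
        [FunctionName("DumpTelemetry")]
        public static void Run(
            [EventHubTrigger("telemetry", Connection = "EventHubConnection")] string message,
            [CosmosDB("mydb", "telemetry", ConnectionStringSetting = "CosmosConnection")] out dynamic document,
            ILogger log)
        {
            document = new { payload = message };
            log.LogInformation("Stored one event.");
        }
    }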
Adjusting any setting in the host.json file will impact all functions within that function app. There is no universally correct answer on when to break out your function into separate apps but there are a few questions you can ask to help answer it for your scenario:
1. Does a particular function have dramatically different scaling characteristics than the other function(s)? (e.g. does one of your message triggers get very different message volume or processing logic than the others - do you need to change the host.json?)
2. Is your function doing a separate business process from the other functions? (e.g. one is receiving device telemetry messages while the other is handling audit telemetry)
3. Do #1 and #2 justify the management and DevOps overhead of creating a separate function app? (Lots and lots of function apps, especially in microservice-like architectures, can be a challenge to manage.)
In your case you have some flexibility with your function apps: because they are just message listeners, they aren't impacted as much as, say, HTTP triggers would be if you find you want to break a function out into a separate app later (e.g. HTTP endpoints changing).
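To make the host.json point concrete: everything in that file applies to the whole host. For example, in a Functions v2-style host.json the Event Hubs settings below (values are purely illustrative) affect every Event Hub trigger in the app, so tuning them for one high-volume function changes them for all of them:

    {
      "version": "2.0",
      "extensions": {
        "eventHubs": {
          "batchCheckpointFrequency": 1,
          "eventProcessorOptions": {
            "maxBatchSize": 64,
            "prefetchCount": 256
          }
        }
      }
    }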
Your overall idea to move to precompiled projects makes sense. That's recommended by Microsoft for all but the simplest ad-hoc Functions.
Single project vs multiple projects should be decided based on whether you want a single Function App or multiple Apps. Function App is a scaling unit. If you want multiple Functions to scale independently, they should be in separate Apps and projects.
We are currently deploying a single Function App per environment / per region in Azure. These Function Apps contain lots of Functions inside them. With the service plan set to Consumption, and therefore dynamic, we are fairly happy with this as it reduces the operational complexity in our ARM templates.
We do wonder, though, if it would be better to have more function apps per environment and spread our functions across them.
Is there any real benefit to doing this as we are under the impression performance will be scaled by the dynamic service plan?
Jordan,
The answer to the question would really depend on the type of workloads you're handling with your functions.
Although the scale controller will handle scaling to meet demand, functions within a Function App do share resources on each instance, and a resource-intensive function (in either memory or CPU) may impact other functions in the same app.
There is also no process isolation between functions in the same Function App. They all run in the same process (except for some of the scripting languages like Python, Batch, etc.) and in the same app domain. So if isolation is a factor (for reasons like security, dependency management, shared state, etc.), you may want to consider splitting functions into different apps.
Versioning and deployment are also worth considering, as the unit of deployment is the Function App (and not the individual functions).
With that said, if you're not running into resource consumption issues with your workloads and the issues mentioned above are not a concern, as you pointed out, running multiple functions in a single function app significantly simplifies management, so I wouldn't change that approach if there's no need to do so.
I hope this helps!
My main concern was already pointed out by Fabio: all your functions are running in the same process. So, if one of the functions runs into a timeout, the host will be shut down (and restarted, of course). This would also affect your other functions.
I had this problem with a Service Bus trigger that called a stored procedure which sometimes hit the timeout threshold. The restart of my function app took about 7 minutes, and the real-time data was not really real-time anymore ;-)
I am working on creating a WebJob on Azure, and the purpose is to handle some workload and perform background tasks for my website.
My website has several Web API methods which are used by the site itself, but I also want the WebJob to perform the same tasks as the Web API methods after they finish.
My question is: should I get the WebJob to call this Web API (if possible), or should I move the Web API code into a class and have both the Web API and the WebJob call that class?
I just wondered what was normal practice here.
Thanks
I would suggest you put the common logic in a dll and have them both share that library instead of trying to get the webjob to call the webapi.
I think that will be the simple way to get what you want (plus it will help you keep things separated so they can be shared - instead of putting too much logic in your webapi).
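As a rough sketch of the shared-library option (the OrderProcessor name and queue are hypothetical; adjust to your own domain), both the Web API controller and the WebJobs SDK function simply delegate to the same class:

    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.Azure.WebJobs;

    // Class library shared by the Web API project and the WebJob project.
    public class OrderProcessor
    {
        public Task ProcessAsync(int orderId)
        {
            // ...the business logic that currently lives in the Web API method...
            return Task.CompletedTask;
        }
    }

    // Web API controller delegates to the shared class.
    public class OrdersController : ApiController
    {
        private readonly OrderProcessor _processor = new OrderProcessor();

        [HttpPost]
        public Task Post(int orderId) => _processor.ProcessAsync(orderId);
    }

    // WebJobs SDK function reuses exactly the same class.
    public static class Functions
    {
        public static Task ProcessOrderMessage([QueueTrigger("orders")] string orderId)
            => new OrderProcessor().ProcessAsync(int.Parse(orderId));
    }

That way the WebJob stays a thin trigger wrapper and the real logic lives in one place.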
I think it's player's choice here. Both will run on the same instance(s) in Azure if you choose to deploy them that way. You can either reuse by dogfooding your API or reuse by sharing a class via a DLL. We started off mixed, but eventually went with dogfooding the API as the number of WebJobs we are using got bigger/more complex. Here are a couple of reasons:
No coupling to the libraries/code used by the API
Easier to move the Web Job into its own solution(s) only dependent on the API and any other libraries we pick for it
Almost free API testing (Dogfooding via our own Client to the API)
We already have logging and other concerns wired up in the API (more reuse)
Both work, though in reality it really comes down to managing the dependencies and the size of the app/solution you are building.