I recently inherited another developer's Azure Functions code written in C# Script (.csx); I'm used to writing Azure Functions in Visual Studio.
I love C# Script's imperative binding: it makes the code simpler (no need to manage connections).
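For context, the kind of imperative binding I mean looks roughly like this in a run.csx (a minimal sketch assuming the Storage binding extension is available; the container and blob path are placeholders):

using System.IO;

public static async Task Run(string queueItem, Binder binder, ILogger log)
{
    // The runtime resolves the storage connection and creates the writer at run time.
    using (var writer = await binder.BindAsync<TextWriter>(
        new BlobAttribute("output-container/result.txt", FileAccess.Write)))
    {
        await writer.WriteAsync(queueItem);
    }
}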
However, I ran into some problems with C# Script:
Code quality tools don't work (StyleCop/Sonar)
Can't write unit tests against a .csx file
If you have a different opinion, please share.
So I decided to convert all ten functions into a .NET project with Sonar integration and unit tests.
Question
Most of my functions don't have any business logic; they are triggered from Event Hub and dump data into Cosmos DB. I can't decide whether to create 10 projects or 1 project under a single solution.
I believe a single project with multiple functions shares a single host.json file, so if I change a host.json value to scale a particular function, it will affect the other functions as well. Am I right?
Is number of functions = number of projects the correct approach?
How would it impact cost?
My personal opinion is that .csx files are fine for experimenting or something quick and dirty, but for production you should be using compiled C#.
Adjusting any setting in the host.json file will impact all functions within that function app. There is no universally correct answer on when to break your functions out into separate apps, but there are a few questions you can ask to help decide for your scenario:
Does a particular function have a dramatically different scaling characteristic than the other function(s)? (e.g. does one of your message triggers get a very different message volume or processing logic than the others, so that you need to change host.json?)
Is your function performing a separate business process from the other functions? (e.g. one is receiving device telemetry messages while another is handling audit telemetry)
Do #1 and #2 justify the management and DevOps overhead of creating a separate function app? (Lots and lots of function apps, especially in microservice-like architectures, can be a challenge to manage.)
In your case you have some flexibility because your functions are just message listeners; unlike, say, HTTP triggers, they aren't impacted as much if you decide to break a function out into a separate app later (e.g. HTTP endpoints changing).
Your overall idea to move to precompiled projects makes sense; that's what Microsoft recommends for all but the simplest ad-hoc Functions.
Single project vs multiple projects should be decided based on whether you want a single Function App or multiple Apps. A Function App is the unit of scaling: if you want multiple Functions to scale independently, they should be in separate Apps and projects.
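For reference, one of the converted Event Hub-to-Cosmos DB functions could look roughly like this in a precompiled project. This is a minimal sketch assuming the in-process model; the hub, database, container and connection-setting names are placeholders, and the exact CosmosDB attribute parameters differ between extension versions.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class IngestTelemetry
{
    [FunctionName("IngestTelemetry")]
    public static async Task Run(
        [EventHubTrigger("telemetry-hub", Connection = "EventHubConnection")] string[] events,
        [CosmosDB(databaseName: "TelemetryDb", collectionName: "Events",
            ConnectionStringSetting = "CosmosDbConnection")] IAsyncCollector<object> documents,
        ILogger log)
    {
        foreach (var eventPayload in events)
        {
            // No business logic: each Event Hub message is written straight to Cosmos DB.
            await documents.AddAsync(new { payload = eventPayload });
        }
        log.LogInformation("Wrote {Count} events to Cosmos DB.", events.Length);
    }
}

A class like this can sit alongside nine similar ones in a single project, or in its own project, without changing the code itself.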
Related
I am a complete newbie to Azure and Azure Functions, but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to do what I would normally do in a .NET console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, apply some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically used for little pieces of code. How little do they mean? Is it best practice to replace a console app with a bunch of Azure Functions (e.g. one function that reads the file and creates a list of objects, another that loops through those items and applies the business logic, and another that writes the data to the database), or can I use one Azure Function to do all of that?
The direct answer is yes - you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
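To illustrate the chained scenario from your question, a function-chaining orchestration could look roughly like this. This is a minimal sketch using the Durable Functions extension; the activity names ("ReadFile", "ApplyBusinessLogic", "WriteToDatabase") are placeholders and each would be implemented as its own activity function.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FileImportOrchestration
{
    [FunctionName("FileImportOrchestration")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var filePath = context.GetInput<string>();

        // Each step is a separate activity function, chained by the orchestrator.
        var rows = await context.CallActivityAsync<List<string>>("ReadFile", filePath);
        var processed = await context.CallActivityAsync<List<string>>("ApplyBusinessLogic", rows);
        await context.CallActivityAsync("WriteToDatabase", processed);
    }
}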
Because of the timeout concern, I would also consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. Here is an example of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service.
If you need long-running jobs but only on scheduled runs, I've had a really great experience with Hangfire, which can be hosted in Azure as well.
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I'd apply to any class to my functions too. And whenever you need to coordinate actions or run things in parallel, you can use the Durable Functions framework as well.
The only concern is related to execution time. Your function can get pretty expensive if you're running on a Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions even if it takes reading files or database communication.
What you need to be careful about and configure, though, is the timeout. Their scalability is an interesting topic as well.
If you need to host an application, you need a machine or a part of the storage space of a machine in Azure to do that.
I have multiple Azure Functions created. Some are related to similar functionality and others are not. Let's say:
1. File Movement - TimerTrigger
2. Processing - HttpTrigger
For File Movement I have 2 functions and for Processing say another 2 functions.
I have created all 4 Azure Functions in the same project. Is that the right way?
Should I put the FileMovement functions in one class file and the Processing functions in a different class file, within the same project/solution?
Or should there be a separate project for each Azure Function?
Application settings values must be shared across all the Azure Functions.
I blogged about this a while ago.
I suggest the following:
For larger solutions: Apply Domain Driven Design principles to your solution. Keep the functions which require to work together (within a bounded context, or a module within a bounded context) within one Function App. "What changes together should be deployed together."
Check the scaling requirements of the individual functions. If all functions have the same scaling behavior then they can stay in the same Function App. If some functions require different scaling than others, keep them in separate Function Apps.
Personally, I like to have one function definition per class since that allows me to use nameof(FunctionClass) in the FunctionName attribute, as I described in this post (see the sketch after this list).
Use solution folders to keep the code in the Function App structured. One of my demo projects on GitHub: DurableFunctions.Demo.DotNetCore.
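For illustration, the one-function-per-class pattern looks roughly like this (a minimal sketch; the class name and the HTTP trigger are just placeholders):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetOrderStatus
{
    // nameof keeps the function name in sync with the class name if the class is renamed.
    [FunctionName(nameof(GetOrderStatus))]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult("ok");
    }
}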
Where can I find guidance on organizing a Visual Studio solution with multiple Azure Functions? Specifically, how should the project be organized?
A single Azure Function resides in a single class file. I suppose each function could be its own class file, all stored within a single project. But is this the optimal solution or do I risk future complications due to a poorly organized project / solution?
The MS Azure team will probably have a better answer but this is what has worked for us so far.
We have several function apps, each are in their own project (and solution).
Of those function apps, some have only a single function, others have multiple functions each in their own class/file.
Where we have multiple functions, it is because those functions are all related to a particular feature area of our system. Hence they work together, and for us it makes sense to maintain and deploy them as a group.
Other function apps are independent, containing only a single function doing a job unrelated to any other function. E.g. we have one timer driven function that crunches some numbers and sends a push notification as required.
Grouping the functions in this way has (so far) made sense for us as it gives us a balance between keeping our deployment relatively simple and being able to scale the 'groups' independently.
Anyway this has proved good enough for our project thus far, but I'd be interested to see if there are better ways.
I am also considering having a large solution with many functions deployed as a single unit. During my research I came across the undocumented setting FunctionsInDependencies, which can be added to your startup project as below.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
    <FunctionsInDependencies>true</FunctionsInDependencies>
  </PropertyGroup>
</Project>
You also need an actual code reference in your startup project to the projects that implement the functions - e.g. a dummy class inheritance (see the sketch below). Otherwise the project reference will be ignored.
This isn't very pretty, to be honest. My final call will probably be to declare all the functions and bindings in the startup project and reference the implementation located in sub-projects.
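A minimal sketch of the dummy-inheritance idea, assuming the sub-project exposes a (hypothetical) non-sealed class named MyCompany.FileMovement.FileMovementFunctions; any compile-time use of a type from the sub-project works, and inheriting from one of its classes is just one way to do it:

namespace StartupProject
{
    // Hypothetical: inherits from a class defined in the referenced functions sub-project,
    // so the compiler keeps the project reference instead of ignoring it.
    public class FileMovementFunctionsReference : MyCompany.FileMovement.FileMovementFunctions
    {
    }
}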
Useful article: https://cosmin-vladutu.medium.com/how-to-split-up-durable-functions-in-multiple-class-projects-328b0015ed37
I want to generate Functions code (C# or NodeJS) based on business rules I have in a database and replace the Function so that it is compiled and picked up the next time the Function is invoked.
What could be the approach?
use kudu and write the Functions to the "file system"
use Functions Host Admin API (if available)
Using the Kudu VFS (file system) or ZIP APIs will be the most straightforward approach; I would recommend the latter.
There are no host APIs for function creation/management today, so that is not a viable option.
Another option, which depending on your requirements and logic might be a better fit, is to have a function that creates those files for you. You just need to be careful that delayed file writes don't cause the runtime to process those files before you're done.
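Whichever Kudu API you pick, the plumbing is similar; as a rough sketch, here is the single-file VFS variant, where a generated run.csx is pushed with one authenticated PUT. This assumes the app's publishing (deployment) credentials; the app name, credentials, function name and generated code are placeholders, and the target folder also needs a matching function.json.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class KuduUploader
{
    public static async Task UploadFunctionAsync(
        string appName, string user, string password, string functionName, string generatedCode)
    {
        using var client = new HttpClient();
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
        // "If-Match: *" tells Kudu to overwrite the file regardless of its current ETag.
        client.DefaultRequestHeaders.IfMatch.Add(EntityTagHeaderValue.Any);

        var url = $"https://{appName}.scm.azurewebsites.net/api/vfs/site/wwwroot/{functionName}/run.csx";
        var response = await client.PutAsync(url, new StringContent(generatedCode, Encoding.UTF8));
        response.EnsureSuccessStatusCode();
    }
}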
I want to write a background process in NodeJS which will process messages from a topic. Reading through an array of confusing articles, these are my options:
Write a webjob in NodeJS with a continuous polling mechanism. All plumbing code has to be written by me.
Write a WebJob in NodeJS using azure-webjobs-sdk-script (which I think is basically a function wrapped in a WebJob) and get the same trigger mechanism as a function, plus the advantage of the WebJobs dashboard.
Write a function in NodeJS with bindings to the topic.
Is my understanding of the role of the azure-webjobs-sdk-script library correct? Is it just a wrapper for functions to run under a WebJob? What is the difference between this and running functions under an App Service plan?
I could not find any clear definition of these options.
azure-webjobs-sdk-script (https://github.com/Azure/azure-webjobs-sdk-script) is what we refer to as the 'Functions Runtime'. In terms of deploying it yourself as a WebJob vs using a Function, let's look at some pros and cons:
Advantages of using Functions
You can use the Consumption plan. That is a huge advantage, especially if your code only needs to run occasionally (basically, it's cheaper!)
You can use the Portal experience to develop it.
It's simpler to deploy: you only need to deploy your NodeJS function, and don't have to worry about the runtime.
The runtime gets automatic updates, while in the WebJobs case you're responsible for keeping it up to date.
Advantages of using a WebJob
The main one is that you get more control, e.g. if you want to customize the script runtime, you can deploy your own custom binaries. With Functions, you always use the official runtime.
Overall, I would definitely suggest giving Functions a try before you get into the more complex alternative of deploying the script runtime as a WebJob.